I'm Marcus. For more than ten years I've worked with some of the most complex companies on IT, cloud, and digital transformation, and now on AI adoption at scale.
Every large enterprise is now grappling with the same challenge: how to move AI from pilot to production, from a handful of enthusiasts to thousands of productive users.
The promise was massive productivity gains. The reality, for most, is a graveyard of PoCs; data sources too poorly structured for AI to use reliably; and sprawling costs across Copilot seats, embedded AI features in SaaS tools, and point solutions, all without tangible outcomes. Executives are asking where the ROI is, and teams are asking what they're supposed to do differently.
AI vendors, meanwhile, are copying the playbook from cloud and data companies: send in forward deployed engineers (FDEs), understand the organization, build custom solutions, hope they scale. But AI is fundamentally different from cloud or data infrastructure: it can help users onboard themselves. Individual users who try it on a real problem see immediate value and fall in love with the tool.
And yet, many enterprises have already been burned. Vendors showed up with polished demos, ran a pilot, and disappeared. The FDE playbook sounds good in a sales deck but falls apart when those engineers leave and nobody internal knows how to keep things running. The field is changing too fast for that. New models every quarter, new capabilities every month. You need a partner iterating alongside you, not a vendor who ships and moves on.
A database migration or Kubernetes rollout doesn't create internal evangelists. AI does. The power users who try it on a real problem want to spread the word and they carry the institutional knowledge needed to bridge the gap between what AI can do in a demo and what it needs to do in production. They know the data, the workflows, the edge cases.
That combination, intrinsic motivation plus deep organizational context, is the unlock. The question is whether your adoption strategy is designed to find these people, equip them, and amplify their impact.
At Ona, we think we've found a recipe that does exactly that. One customer went from 20 weekly active users to 350 in three months.
Most enterprises default to one of two approaches. The first is gated and personalized: hand-pick teams, run bespoke onboarding, control every step. It produces happy pilots but doesn't scale as bandwidth runs out. The second is the opposite: buy seats for everyone, send a link, hope people figure it out. Most don't, get frustrated, and give up.
What works is a more balanced model that combines people, process, and product into a flywheel where each reinforces the other.
Our approach starts with two types of engagement that run in sequence.
Large-scale presentations, co-sponsored with the customer's leadership, spread awareness broadly. These aren't product demos. They frame the opportunity, show what's possible, and create a pull signal: people who want to try it.
Targeted workshops follow with the most interested participants: small groups, hands-on, solving their actual problems with Ona. Real repos, real data, real workflows. This is where something important happens: champions emerge naturally, not because someone appointed them, but because they solved a problem that mattered to them and now want to help others do the same.
These champions or power users become the engine of adoption. They have two things that no vendor or central IT team can replicate: they're motivated to share what they've learned, and they have the internal knowledge and context to make it stick. They know which datasets matter, which workflows are painful, and which teams would benefit most. When they configure Ona for their team, they're encoding organizational knowledge into the platform, not following a generic template.
"I'm amazed how many things you can build now... I have some personal feeling to that — I want to pass my knowledge to others. Because this is a super important skill. Everyone needs to learn that." — Principal Data Scientist
Every large enterprise now has some version of an AI platform team. These teams have grown in importance with the advent of AI, but many are still perceived as gatekeepers who say no or slow things down.
We work with these teams to help them shift from blockers to enablers. The pressure to "control" is real, especially as token consumption costs grow. But control and enablement aren't opposites. Platform teams can guide users to maximize their impact while keeping costs manageable. Shared AGENTS.md files and organization-level skills spread best practices for common tasks across the organization. Power users, meanwhile, can contribute more specific configurations for their teams within a curated group.
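As a concrete illustration, an organization-level AGENTS.md can encode those shared guardrails and best practices in plain markdown. The conventions below are invented examples, not a real customer's configuration:

```markdown
# AGENTS.md — org-wide conventions

## Build & test
- Run `make test` before proposing any change; CI mirrors this target.

## Code style
- Python services use ruff with the shared config in `tools/ruff.toml`.

## Guardrails
- Never commit credentials; secrets come from the platform's secret store.
- Changes to `infra/` require a human reviewer from the platform team.
```

Power users then layer team-specific files on top, so the platform team's baseline and the team's local context compose rather than compete.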
This collaboration model avoids the trap you see in most large organizations: "us vs. them" between central IT and business units. Instead, platform teams set the guardrails and shared foundations. Power users build on top of them for their specific context. Everyone contributes and everyone benefits.
People and process get you started. Product is what makes it scale.
In Ona, a project is a template for a repository. One power user or platform team sets up the dev container, tasks, services, AGENTS.md, and skills.md files. Every other user who opens that repo, whether there are 20 of them or 200, gets their setup done before they type a single command.
All the language runtimes they need, the required datasets, and data-source connections via integrations or simple APIs: pre-configured and ready to go.
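The dev container part of that setup follows the open Dev Container specification, so a project's environment is just a file in the repo. A minimal sketch (the image, feature, dataset URI, and bootstrap command here are placeholders, not a real project's config):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "analytics-sandbox",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "features": {
    // extra toolchains are composable "features"
    "ghcr.io/devcontainers/features/node:1": {}
  },
  "containerEnv": {
    // placeholder pointer to a curated data source
    "DATASET_URI": "s3://example-bucket/curated/"
  },
  // runs once after the environment is created
  "postCreateCommand": "pip install -r requirements.txt"
}
```

One person with the right context commits this once; everyone who opens the repo afterward inherits it.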
This is the multiplier effect of champions. It takes one person with the right context to set up a project. Everyone else on the team (developers, data scientists, analysts) hits the ground running. The barrier to entry drops from "spend a day configuring your environment" to "open the repo and start working."
Automations let power users build event-driven workflows triggered by pull requests, tickets, or schedules. These run across tens or hundreds of repositories automatically.
We've seen automations unlock value in migrations (SCM, framework, cloud), security tasks like CVE mitigation, compliance scans, and documentation generation. A single automation, written once, can save hundreds of hours across an organization, and it runs without anyone needing to remember to trigger it.
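To make the "write once, run everywhere" pattern concrete, here is a minimal sketch of a CVE-mitigation pass that bumps a vulnerable pinned dependency wherever it appears. The repo names, file contents, and version numbers are hypothetical stand-ins; a real automation would fetch files through the platform's source-control integration and open pull requests instead of printing:

```python
VULNERABLE = ("requests", "2.25.0")   # package/version flagged by a CVE scan (illustrative)
FIXED_VERSION = "2.32.0"              # first patched release (illustrative)

def mitigate(requirements: str) -> tuple[str, bool]:
    """Return (updated requirements.txt contents, whether a change was made)."""
    lines, changed = [], False
    for line in requirements.splitlines():
        name, _, version = line.partition("==")
        if (name.strip(), version.strip()) == VULNERABLE:
            lines.append(f"{name.strip()}=={FIXED_VERSION}")
            changed = True
        else:
            lines.append(line)
    return "\n".join(lines), changed

# The same pass runs across every repository the automation is pointed at.
repos = {
    "payments-api": "flask==2.3.0\nrequests==2.25.0",
    "etl-jobs": "pandas==2.2.0",
}
for repo, reqs in repos.items():
    updated, changed = mitigate(reqs)
    if changed:
        # In a real automation this would open a pull request.
        print(f"{repo}: bumped {VULNERABLE[0]} to {FIXED_VERSION}")
```

The transformation logic is written once; the event trigger (pull request, ticket, schedule) and the list of target repositories are configuration.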
If you want some inspiration for what automations can do and how to build them, check out our Automations Templates page.
More repos, more integrations, more data sources, more non-technical users. All of this increases the attack surface for prompt injection, data exfiltration, and unauthorized access. As AI adoption grows, so does the risk.
Ona addresses this at multiple layers. Every environment runs in its own isolated VM: not a container, but a full virtual machine with its own kernel. Your code and data never leave your VPC. Veto, our kernel-level enforcement layer, provides syscall-level controls that are content-addressable and unforgeable. Audit logs capture every action by every human or agent actor.
For regulated industries like pharma, insurance, or banking, this is the difference between wanting to scale AI adoption and actually being allowed to.
This is one of the areas we're investing in most heavily, and it's an area where we're setting the standard for the industry. The foundation is already in production with our enterprise customers: VM isolation, kernel-level enforcement, Datawall, and full audit trails. The next generation of protections (agent-untouchable, self-healing layers, and a credentials proxy) is on the way.
The adoption curves we're seeing with early enterprise customers are inspiring.
One customer went from 20 weekly active users to 350 in three months. Another grew from 80 to nearly 2,000 in six months. Cumulative prompts, a proxy for actual usage depth, grew exponentially in both cases. One customer is now running 150,000 prompts per week.
But the numbers only tell part of the story. Here's what we hear from the people using it:
"I see Ona as a way of life. Anything I want to do, this is the first thing I turn on. I've even set my environment to three hours of inactivity rather than thirty minutes, because I want to keep things alive. This is going to be the way forward." — SVP of Engineering
"I finally did this very complex documentation that no one wanted to do in 20 minutes. Normally it would have taken me 4–5 hours." — Software Engineer
"Ona made me a vibe coder in just a week. And so it does with basically the whole community." — Science & Matrix Lead Automation
"The results were amazing. I'm a one-person IT team supporting our drug discovery data scientists, and I tested Ona on a 10k-line dataset. In 20 minutes it validated three hypotheses we were exploring and came up with two more we weren't aware of, more effective than a year of our work. Thank you for making this available to us." — Data Engineer
These quotes come from production deployments where adoption grew organically, not from cherry-picked testimonials in a controlled pilot. People volunteered them because the product delivered value fast enough that they wanted to talk about it.
We're iterating based on customer feedback and actively working on the pieces we think matter most:
Cost visibility. Enterprises need to see cost per user, per team, per use case. Not just to manage spend, but to understand where AI is delivering the most value and double down.
Value measurement. Beyond cost, surfacing the actual impact across metrics like time saved, tasks automated, and quality improvements, so that the ROI case writes itself.
The outstanding results from our early enterprise customers show us we're on the right track. But the landscape will look different in six months. That's why the relationship between vendor and customer has to go beyond transactional. The organizations seeing real returns are working with a partner who adapts as the field evolves and ships based on what customers actually encounter in production. AI adoption at scale is a people problem with a product solution, and the uncertainty of this moment makes the partnership part non-negotiable.