Building a coding agent is challenging.
Something shifted over the Christmas break in 2025.
The internet got time to experiment with coding agents, and many discovered the remarkable productivity that comes with them, particularly when running them in parallel or in the background. Unlike code assistants in your editor, agents can run autonomously for long stretches, executing test suites, compiling builds, and iterating toward task completion without explicit human supervision.
Naturally, many organizations are reaching the same conclusion about these agents:
‘We can’t run agents like this on our developer laptops anymore; we need sandboxes.’
Security is often the principal concern and main motivation for agent sandboxing. But sandboxing is also a prerequisite to the productivity gains: when you run agents remotely, you can scale them horizontally, run them 24/7, and wake them up or shut them down on a schedule. Sandboxing then lets organizations automate nearly anything with agents: builds that fix themselves, ten code-refactoring pull requests scheduled to run overnight, fleets of agents doing mass refactoring across thousands of legacy repositories. At this point, teams face a choice: implement sandboxes themselves or engage a partner.
We’ve also observed an alarming trend of organizations reflexively taking internally available infrastructure primitives such as Kubernetes or CI platforms and bolting on coding-sandbox functionality without considering the ‘day 2’ and total-cost-of-ownership implications. At Ona, we’ve seen first-hand what happens when you force stateful, interactive workloads onto infrastructure that was never designed for them, which is why we left Kubernetes.
Autonomous agents are unusual workloads that you’ve most likely never supported before.

They aren’t short-lived applications you can bundle into a container and forget about. They’re long-running, stateful, and interactive, executing arbitrary code with real side effects. That combination forces you to think about isolation boundaries, security models, and lifecycle management in ways most existing infrastructure was never designed for.
What you’ll need that you’re likely not thinking about: persisting volumes so sandboxes can start and stop, backup and recovery, timeouts and resource limits to control cost, tight integration with source control, image registries, identity providers, and all the editors from JetBrains to VS Code to Cursor. Then come secrets and environment variables, role-based access controls, MCP support, LLM and API gateway integrations, and caching and performance tuning. And building all of this means committing to own and maintain mission-critical infrastructure: if the system goes down, your entire engineering team grinds to a halt.
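To make the cost controls above concrete, here is a minimal, Linux-only Python sketch of enforcing a wall-clock timeout and per-process CPU/memory limits on a sandboxed agent command. This is an illustration of the general technique, not Ona’s implementation; the function name and default budgets are made up for the example.

```python
import resource
import subprocess

def run_agent_sandboxed(cmd, timeout_s=3600, cpu_s=1800, mem_bytes=4 * 2**30):
    """Run an agent command with a wall-clock timeout and per-process
    CPU/memory rlimits. Linux-only sketch; real sandboxes layer on
    stronger isolation (namespaces, cgroups, VMs)."""
    def apply_limits():
        # Applied in the child just before exec: cap CPU seconds and
        # address-space size so a runaway agent can't burn the budget.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_s, cpu_s))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    try:
        return subprocess.run(cmd, preexec_fn=apply_limits,
                              timeout=timeout_s, capture_output=True)
    except subprocess.TimeoutExpired:
        # Agent exceeded its wall-clock budget; caller reclaims the sandbox.
        return None
```

A real platform would track these budgets per sandbox and persist state before teardown, but even this sketch shows why timeouts and limits are policy decisions you have to own, not free side effects of the runtime.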
At this point, even after seeing the scope, some teams will still be tempted to think that coding sandboxes can fit into their existing infrastructure: ‘surely it can just run in a container, or in CI, or on an existing Kubernetes cluster’. This is where many teams discover, often too late, that the most ‘obvious’ infrastructure primitives turn out to be the least suitable. Agents don’t behave like the workloads those systems were designed for, and the mismatch only becomes apparent once you’re deep into implementation.
Containers, CI pipelines, Kubernetes and virtual machines are the primitives most teams reach for. But each encodes assumptions that break down with autonomous, stateful coding agents.

What looks like a safe choice often becomes a source of friction once agents run continuously and at scale, for reasons you might not predict. The mismatch doesn’t show up immediately; it shows up later, as complexity and workarounds.
As you can see, the obvious runtimes all fall short in different ways. For organizations focused on capturing the value of coding agents by rolling out AI software engineers, running them in parallel, and applying them across the business, the faster path is to treat sandboxing as something you adopt, not something you build, and to spend internal effort deploying your virtual workforce rather than reinventing the foundation. That’s where Ona helps.
Ona is built for organizations that want to get on with deploying AI software engineers. If your goal is to run agents in parallel, in the background, and across real work without spending months building infrastructure, Ona is designed for you. Ona removes the operational overhead: it is self-hosted but not self-managed. It runs inside your environment or VPC, without you having to operate any of the infrastructure. You focus on deploying your hybrid workforce; Ona handles the rest. So if you want to spend the next year deploying AI software engineers, not building sandbox infrastructure, try Ona for free (https://app.gitpod.io/) or get a demo (https://ona.com/contact/demo).