July 10, 2025AI

The enterprise version of Codex just dropped (but it's not from OpenAI)

You're not just coding anymore, you're orchestrating a symphony of AI working in the background.

Codex changed how developers think about coding agents. Spin up multiple agents, hand them different parts of a problem, and review the results when they're done. The shift from writing every line yourself to orchestrating agents working in parallel is real, and it's productive.

But that experience stops at the enterprise boundary. Most engineering teams can't use these tools at work because the infrastructure underneath — where code runs, how models are accessed, what security enforces — doesn't pass muster with their security and platform teams.

The enterprise requirements Codex wasn't built for

Codex is a developer tool optimized for individual productivity. It runs in OpenAI's cloud, on OpenAI's models. For personal projects and small teams, that's fine.

Enterprise engineering organizations have a different set of constraints:

These aren't edge cases. They're the baseline for any serious enterprise adoption.

Ona: the platform layer for enterprise agents

Ona gives you everything that makes Codex compelling — parallel execution, background processing, autonomous task completion — with the infrastructure enterprises require underneath.

Pairing with Ona using the VS Code editor option directly inside of Ona

Runs inside your network

A runner deploys inside your VPC on AWS or GCP. You provide your subnet, connect your source control provider (GitHub, GitLab, or Bitbucket), and choose a model — typically from Amazon Bedrock or your own inference endpoint. Source code and credentials never leave your infrastructure. Your security team can say yes to this one because there's nothing to negotiate: the data stays where it is.

Model-agnostic by design

The model landscape is moving fast. New models match or exceed the previous frontier every few months, often at a fraction of the cost. Tying your agent infrastructure to a single provider's model roadmap means you can't move when something better ships.

With Ona, you're not locked into any one ecosystem. You can bring your own model — whatever works best for your use case. Your infrastructure, your models, your rules. Models improve weekly if not daily. You should be able to swap without re-platforming.

Security enforced at the kernel

Agent security can't live at the application layer. Agents that can reason about their restrictions will work around them — not through jailbreaks, but through normal tool use.

Ona enforces restrictions at the kernel level with Veto, our enforcement engine built on BPF LSM. Every file access, network connection, and process execution is a syscall. The kernel resolves the real path behind every symlink. Enforcement is synchronous with execution. The agent can't bypass what it can't see.

On top of that: audit logging of every action, RBAC for both human and agent permissions, ephemeral scoped credentials per environment, and network isolation between sessions.

Full editor access

When an agent gets close but not quite right, you shouldn't have to rephrase the same instruction three different ways. With Ona, you open your actual editor — VS Code, JetBrains, Cursor, Windsurf — and jump into the same environment the agent is working in. Make your edits directly, then hand it back.

Pairing with Ona via a desktop editor like JetBrains, Cursor or Windsurf.

Same environment, same tools, same state. No re-setup, no drift between what the agent sees and what you see.

Reproducible environments, every time

Every agent session runs in its own isolated VM with a dedicated kernel, defined by Dev Containers. Dependencies install, services start, tools are available — the same way, every time. An agent is only as autonomous as its ability to verify its own work, and verification requires a known-good environment.

Background agents: where the real value compounds

The biggest shift isn't about which agent you use interactively. It's about what happens when agents run continuously in the background.

Stripe's Minions merge over a thousand agent-authored PRs per week. Ramp's background agent accounts for over half of all merged PRs. At Ona, agents authored 89% of the PRs we merged on main last month.

Ona Automations take this further: fully autonomous workflows that pick up work without a human kicking them off. A Sentry automation triages issues overnight and has PRs ready by morning. A CVE remediation workflow detects vulnerabilities on a schedule and ships the fix. A backlog picker scans your issue tracker daily and delivers merge-ready PRs before standup.

Each run executes in a full development environment, in parallel across repositories, with human review before anything merges. This is the capability that doesn't exist on a developer's laptop, and it's where the gap between individual tools and an enterprise platform becomes clear.

How Ona compares

FeatureCodexOna
Background execution
Agents can run in parallel
Runs in your VPC
Model-agnostic
Full editor access to environments
Ephemeral, reproducible environments
Kernel-level security enforcement
Automated workflows (Automations)
BitBucket and GitLab support
Enterprise features (SSO, audit, RBAC)

Get started

If you've experienced what background agents can do for personal projects, the question is how to bring that to your engineering organization — with the governance, security, and scale it requires.

Ona is available today. For enterprise deployments, talk to our team.

Join 440K engineers getting biweekly insights on building AI organizations and practices

Related blogs

This website uses cookies to enhance the user experience. Read our cookie policy for more info.