Ona Agents are purpose-built to take advantage of the security guarantees, isolation, and human-agent interaction afforded by Ona Environments. The result is a power tool for individuals and enterprise organizations alike, one that handles real-world requirements for tooling, compliance, and process. This page describes how we at Ona use Ona. Consider it a “best practice guide” rather than a handbook you must follow: start with these tips and make them your own; experiment and find out what works for you.

Customize your setup

Ona Agents build on top of Ona Environments. These environments offer control and isolation, but require configuration to be effective.

Tools (dev container) and automations

devcontainer.json describes the tools needed to work on a particular codebase. Developers and agents share this setup. It’s helpful to use Microsoft dev container images and to align tool versions with the CI pipeline (e.g. pin an exact version of Java). Ona’s automations.yaml extends it with tasks and services that automate setup and recurring chores. Our own monorepo setup contains automations for running the backend, running the frontend, and rebuilding API code. Ona Agents can use these automations and add their own, e.g. for serving previews.
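For illustration, a minimal devcontainer.json pinning tool versions might look like the sketch below. The image tag, feature, and command are hypothetical examples, not our actual setup:

```json
{
  "name": "my-service",
  "image": "mcr.microsoft.com/devcontainers/go:1.22",
  "features": {
    "ghcr.io/devcontainers/features/node:1": { "version": "20" }
  },
  "postCreateCommand": "go mod download"
}
```

Both developers and agents then start from identical, CI-aligned toolchains.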

Write an AGENTS.md

AGENTS.md is a readme for agents. Ona Agents will pull this file into context for every conversation, making it an ideal place for describing:
  • Common commands, e.g. how to test or rebuild generated code
  • Key files and parts of the system
  • Where code style guides can be found, e.g. pointing to another file or a website
  • Branch naming conventions
This file has no specific format, and is ideally kept short and concise. You can refer to other files and Ona will read them when necessary. Here’s an example from our own repo:
## Guidelines
For PR creation guidelines, check dev/docs/pull-request-guidelines.md
For Go modifications, follow the rules in dev/docs/go-styleguide.md
For frontend modifications, follow the rules in dev/docs/frontend.md
For vscode changes, follow the rules in dev/docs/vscode.md

## Feature work
- use feature branches from main for pushing work following this naming pattern:
  - [2-3 initials from git config user.name]/[[numeric-part-of-issue-ID?]-][2-3 words shorthand of the topics, separated by dashes]
  - should not be more than 24 characters total
  - extract initials by taking first letter of each word from git user.name (e.g., "John Doe" → "jd", "Alice Smith Johnson" → "asj")
IMPORTANT: Always run git config user.name first to get the actual name - do not assume or guess the initials

## Code generation
- ALWAYS use leeway scripts to generate code, e.g. `leeway run api/def:generate` instead of running `buf generate` directly.
- Use `leeway collect scripts` to understand what code generation scripts are available.
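The initials rule above can be expressed as a small shell pipeline. A hedged sketch; in practice the name comes from `git config user.name`:

```shell
# First letter of each word of the git user name, lowercased
# ("Alice Smith Johnson" -> "asj"), per the branch naming rule above.
name="Alice Smith Johnson"
initials=$(printf '%s' "$name" \
  | awk '{ for (i = 1; i <= NF; i++) printf tolower(substr($i, 1, 1)) }')
echo "$initials"   # prints "asj"
```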
Small changes can have a big impact. As with all prompts, iterate on the effectiveness of your instructions. Because Ona uses Anthropic’s Sonnet 4 model, all-caps words like IMPORTANT and ALWAYS are effective for giving extra emphasis to key rules.

MCP servers

Ona Agents support stdio MCP servers configured in .ona/mcp-config.json. The format is aligned with other common MCP server configuration files, such as those of Claude Desktop or Cursor. We use MCP servers to enable richer GitHub interaction and to connect to Linear. Note: some organizations may not allow the use of MCP and can disable MCP support in the settings.

SCM integration: GitHub, GitLab, Bitbucket

Ona integrates with your SCM directly, e.g. to check out code into environments. By default, Ona Agents only have SCM access via Git. Many workflows, however, benefit from deeper integration, e.g. opening a pull request. We use the GitHub MCP server to enable access to pull requests and GitHub Actions logs. The MCP server configuration can read the GitHub token Ona has made available to Git:
{
  "mcpServers": {
    "github": {
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "command": "docker",
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${exec:printf 'protocol=https\nhost=github.com\n' | git credential fill 2>/dev/null | awk -F= '/password/ {print $2}' 2>/dev/null}"
      },
      "toolDenyList": [
        "search_code"
      ]
    },
    // more servers here
  }
}
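The `${exec:...}` expression above boils down to a small pipeline: ask Git’s credential helper for the GitHub credentials and extract the password field. Shown here with a fake credential blob in place of `git credential fill`, and a made-up token:

```shell
# `git credential fill` emits key=value lines; the awk filter
# picks out the password (i.e. the access token).
printf 'protocol=https\nhost=github.com\nusername=x-access-token\npassword=ghp_example123\n' \
  | awk -F= '/password/ {print $2}'
# prints "ghp_example123"
```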

Linear and Jira: User secrets

We use Linear to organize our work; many of our customers use Jira. User secrets together with a Linear MCP server make this easy to set up. For example, the Linear MCP server below works when a user has a LINEAR_API_KEY environment variable configured as a secret.
{
  "mcpServers": {
    "linear": {
      "args": [
        "serve",
        "--write-access=false"
      ],
      "command": "/usr/local/bin/linear-mcp-go",
      "name": "linear"
    }
  }
}

Command Deny List

_Screenshot: Policies configuration showing the MCP toggle and a Command Deny List with `aws *` blocked_
Ona Agents operate with a command deny list that’s enforced across an entire organization. We have, for example, denied Ona from using AWS commands by adding `aws *` to the deny list. This way, should an engineer accidentally sign into production, we can be certain no unforeseen database deletions occur.

Custom Slash Commands

_Screenshot: Slash commands interface showing available commands like /pr, /clear, /ketchup, /commands, /support-bundle_
Slash commands are organization-wide prompts that encode how a team operates. Use them to raise pull requests, review code, and write documentation. Ona comes with a set of built-in commands, notably:
  • /clear which resets the conversation (asks for confirmation)
  • /commands which is available to org admins and
  • /support-bundle which produces a support bundle for when Ona doesn’t work as intended.
We use slash commands extensively in our workflow and they’ve significantly helped structure our work, for example:
  • /pr to raise pull requests
  • /weekly-digest to get a summary of changes in the last week
  • /fix-ci-build for fixing CI builds on branches
    Understand if there's a Pull Request for your branch.
    If so, investigate the latest GitHub action build for this branch, and understand if and why the build failed.
    Design a minimal fix to make the build pass and implement the fix. If at any point you are uncertain, ask the user.
    DO NOT push your changes.
    
  • /create-runbook to write new runbooks consistently
    
    Create a new runbook for Gitpod following these guidelines:
    
    1. **Use the template from `docs/runbooks/_TEMPLATE.md`**
    2. **Reference resources from `docs/runbooks/_resources.md` for dashboard and log links**
    3. **Look up technical details in the codebase as needed**
    4. **Follow the patterns from existing runbooks in `docs/runbooks/`**
    
    ### Required Information
    
    If not already provided, ask the user for:
    
    **Service/Alert Name:** What service or alert type is this runbook for? (e.g., "Runner Hibernation Issues", "High AWS Error Rates")
    
    **Alert Details:** Please provide the alert names and their Prometheus queries:
    Example format:
    Alert Name: High Error Rate
    Query: sum(increase(http_requests_total{status=~"5.."}[5m])) > 10
    
    Alert Name: Failed Connections  
    Query: sum(connection_failures_total[5m]) by (region) > 5
    
    **Additional Context:** Any specific technical context, dependencies, or special considerations? (e.g., "MemoryDB used for state management", "ElastiCache used by agents reconciler", "Only affects managed runners")
    
    ### Output
    
    Generate a complete runbook including:
    - Proper quicklinks using actual dashboard/log URLs from `_resources.md`
    - Investigation steps with specific AWS accounts and regions
    - Appropriate escalation team (Hosted Compute/Core/Agents)
    - Actionable mitigation steps
    - Suggested filename and location
    
    **Start by asking for any missing information from the Required Information section.**
    
  • /changelog-ona-swe-agent to produce user-facing changelog and Slack announcement for Ona SWE agent releases
    Generate user-facing changelog and Slack announcement for Ona SWE agent releases.
    
    ### Process
    1. Extract stable commit
    2. Extract candidate commit
      2.1 If not provided suggest the candidate
    3. Run git log commands: `git log stable_commit...candidate_commit -- ai-agents`
    4. Filter for Ona-relevant, user-facing changes only
    5. Create changelog.md and Slack announcement
    
    ### Format
    **Changelog:**
    Ona SWE Agent {{version}} Changelog
    
    ### 🚀 Features
    ### 🐛 Bug Fixes  
    ### 🔧 Engineering
    
    **Version**: {{version}}
    **Commit**: [commit]
    **Previous**: [stable version]
    
    **Slack:**
    🚀 **Ona SWE Agent {{version}} is now available!**
    
    **Key highlights:**
    • Feature 1
    • Feature 2
    
    Full changelog: [link]
    
    Don't use icons.
    
    Use GitHub PR links: `[#XXXX](https://github.com/gitpod-io/gitpod/pull/XXXX)`
    Focus on user impact, group logically, keep Slack concise (5-6 points max).
    
  • /catch-up to catch the agent up to changes in an environment
    Understand the changes in this environment compared to main.
    What files have changed and why? What's the user's intent?
    

Ask Ona

For our team, “ask Ona” has become as natural as saying “Google it”. While the phrase covers any interaction with Ona, it’s most commonly used when we encourage each other to inquire about our codebase and systems directly.

Codebase inquiry: “How does X work?”

Ona Agents are very capable of understanding a codebase. The questions we ask Ona span the entire spectrum of our platform, e.g.:
  • What SSO providers does our platform support?
  • What kinds of access tokens does our backend issue?
  • What patterns do we use for testing with fake clocks in Go?

Developer onboarding

New developers joining Gitpod experience Ona as their onboarding companion. Instead of consuming senior engineers’ time with exploratory questions, they engage directly with the codebase through Ona. For example, someone joining the data team would use Ona Agents to understand how analytics events are emitted, what events exist, what the naming conventions are, and how events are tested. New team members can explore at their own pace, ask “naive” questions without hesitation, and build their mental model of the system iteratively. They’re learning by interrogating the actual code, not outdated documentation or someone’s potentially incomplete recollection.

Weekly Digest

We generate a “weekly digest” that analyzes our codebase evolution. It’s an assessment of our code’s direction, identifying churn hotspots, technical debt accumulation, declining test coverage, and API inconsistencies. The digest tracks velocity and change patterns, highlighting frequently modified areas and unusual activity signaling architectural stress.
Generate a focused weekly digest that gives the team actionable insight into our software engineering activity.

Structure the digest as follows:

- Team Output and Productivity: Summarize total commits, merged pull requests, and notable achievements. Highlight metrics that reflect productivity—such as lead time for changes and the number of reviewed issues.
- Code Churn Focus: Identify the three files with the highest code churn this week—that is, those with the largest combined lines added and removed. For each file, list:
    - The complete file path
    - Total lines changed (added plus removed)
    - A concise summary of the nature of the changes, inferred from commit messages
- Inconsistencies and new patterns: Find code patterns, library use and API definitions that are inconsistent with previous code.
- Clarity and Value: Present all information in a straightforward format that makes it easy for the team to see where the most change and activity occurred, and to spot potential hotspots for refactoring or further review.
- Shareable Output: Format the digest so it can be posted or forwarded—use clear section headers and bullet points.

Leverage available repository and project tracking data. Prioritize simplicity, clarity, and actionable reporting.

Also, give me a histogram of commits by author. Note: we squash PRs when they land on main.
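The commits-by-author histogram at the end of the prompt can also be pulled directly with git. A rough sketch, run inside the repository, with the time window adjusted to taste:

```shell
# Rough commits-by-author histogram for the last week.
# On a squash-merge workflow each merged PR counts as one commit.
git log --since="1 week ago" --pretty='%an' | sort | uniq -c | sort -rn
```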

Measuring Ona’s adoption

When Ona Agent contributes to code changes, it marks itself as a co-author. By tracking these contributions we understand how effectively we’re integrating agents into our development workflow. At the time of writing, Ona co-authors 60% of our engineering team’s commits to Ona.
What's the percentage of commits with Ona contributions merged to main in the last four weeks?
Give me a daily histogram and weekly average.
Beware that we squash commits when we merge to main.
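A hedged sketch of how such a number can be computed with plain git, assuming the co-author trailer contains “Co-authored-by: Ona” (check your own commits for the exact trailer text):

```shell
# Share of commits on main in the last four weeks carrying an Ona co-author trailer.
total=$(git rev-list --count --since="4 weeks ago" main)
ona=$(git rev-list --count --since="4 weeks ago" --grep='Co-authored-by: Ona' main)
echo "$ona of $total commits co-authored by Ona"
```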

Writing code

Ona Agents’ primary use is writing code: fixing bugs, adding features, improving tests, enhancing code quality. Any modification previously made using a text editor can now be orchestrated through Ona. But like any powerful tool, wielding it effectively requires understanding and skill. The most common pitfall we see is under-specification. Instead of saying “fix the frontend tests”, be precise: “Fix the flaky frontend tests for the login page where the button selectors aren’t being found correctly.” Specificity is a superpower.

Explore, plan, build: tackling system-wide changes

For substantial changes that span multiple components, the explore-plan-build workflow works well. Think of it as a three-phase journey where you progressively build context and confidence.
  1. Explore: Start by having Ona understand the system. You might prompt: “Examine the authentication flow, including failure modes, security features, database connections, and audit logging. Give me a report of your findings.” This exploration serves dual purposes—it helps both you and Ona Agents build a mental model of the codebase.
  2. Plan: With context established, design the change. For example: “We need to add audit logs to every login attempt. Ensure this doesn’t impact failure rates and maintains compatibility with all SSO providers. Design a change that achieves this.” A particularly effective pattern here is asking Ona to write the design to a markdown file, then iterating on it in VS Code. Whenever you update the file, tell Ona explicitly to incorporate your changes. This creates a collaborative design loop where you maintain control while leveraging Ona’s capabilities.
  3. Build: Once the design feels right, implementation requires nothing more than “implement it.” Ona already has the context and plan—now it executes.

Bottom-of-the-backlog drive-by changes

Issues that languish at the bottom of any backlog—copy changes, padding adjustments, missing test cases, absent log statements—now get fixed immediately. We regularly go straight from identification to pull request. The key pattern here is learning from existing code. Instead of specifying every detail, leverage what’s already there: “Add a test case to the agent tests that verifies the out-of-token failure mode. Understand the existing test cases and add one that’s highly consistent.” This approach works beautifully for:
  • Missing entries in table-driven tests
  • Additional logging or metrics
  • UI consistency fixes between desktop and mobile
  • Small but important quality-of-life improvements
By asking Ona to generalize from existing patterns, you ensure consistency while moving fast.

Ona as documentation writer

We treat documentation as code. Good documentation, like good code, is consistent in tone and structure. We’ve built a /docs-writer command that guides someone writing documentation through the process so that we all write in one voice. Our own documentation is written with Ona, using a prompt similar to the one below. The docs-writer prompt contains instructions for an interactive conversation rather than trying to one-shot the result. Next to instructions like “Write like Strunk and White’s Elements of Style”, it contains a workflow section encouraging the agent to ask clarifying questions.
Technical documentation specialist who creates clear, accurate documentation following industry best practices. Never invent facts—always request clarification when details are missing.
Only operate in ./docs/gitpod

### Writing Style
- **Clear & Concise**: Simple language, short paragraphs, bullet points for complex procedures
- **Active Voice**: "Click Save" not "The user should click Save"
- **Consistent Terms**: Uniform naming for UI elements, commands, parameters
- **Define acronyms** on first use

### Content Structure
1. **Title**: Short, descriptive
2. **Introduction**: 1-2 sentences + prerequisites if needed
3. **Instructions**: Step-by-step with subheadings/bullets
4. **Examples**: Working code samples with proper formatting
5. **Visuals**: Suggest screenshots/diagrams where helpful (mark as "_Insert screenshot here_")
6. **Troubleshooting**: Common issues and solutions
7. **Next Steps**: Related docs, advanced guides

### Best Practices
- **Proactively ask questions** for missing/ambiguous info
- Provide **fully working code examples** with syntax highlighting
- Maintain friendly but succinct tone

### Critical Terms
- **Gitpod** (never GitPod)
- **Dev Container** (never devcontainer)
- "Self-hosted" = "in your VPC" or "bring your own cloud"
- Base URL: https://app.gitpod.io
- Workspaces = environments

## Workflow
1. Review provided content
2. Identify gaps/ambiguities
3. Ask clarifying questions (e.g., "Is this feature v1.2+ only?")
4. Generate updated draft with:
   - Clear structure
   - Code samples
   - Visual suggestions
   - Troubleshooting tips
5. List any remaining questions

## Key Reminders
- **Never hallucinate**—ask for details instead
- Use direct language ("Click Open")
- Enrich with examples and references
- Encourage feedback/contributions
- Keep docs current and alive

When uncertain about any detail, explicitly request clarification rather than guessing. Focus on creating documentation that helps users succeed quickly.
You can also use Ona to generate technical documentation that lives alongside your code. Though remember: markdown files in your repository are essentially a cache of information. Ona can understand your codebase on demand, which has the advantage of never going stale. Choose your approach based on whether you need immediate human readability or dynamic accuracy.

Preview before you ship

For frontend changes, seeing is believing. Ask Ona to provide a preview, and it will spin up a service (visible in your environment details) with the appropriate port exposed. We’ve used this successfully with React, Vite, and Storybook. The same is true for backend changes. Since adopting Ona we have found even more value in our test harness: verification builds our confidence in the changes Ona is making, and helps Ona verify its own work.

Progressive engagement: Conversation → VS Code Web → Desktop IDE

The VS Code Web next to any conversation simplifies reviewing changes and making fine adjustments. For more complex changes, one needs to go deeper. From this, a pattern has emerged where we progressively engage with the code depending on the complexity of the change.
  • For simple changes the conversation and summary instill enough trust to raise a pull request directly.
  • For anything else VS Code Web goes a long way. Particularly the combination of manual edits and Ona Agents is very powerful.
  • Working in full-manual mode, i.e. a desktop IDE, is only necessary for deep, mono-focused work; e.g. when we need to establish a new pattern and want to do that manually, we’ll move back to a desktop IDE.

Raising the Pull Request

Teams have standards for pull requests—templates to follow, checks to run, issues to link. We’ve encoded ours in a /pr command. When an engineer is satisfied with their changes, they simply type /pr and Ona walks through the entire process:
  • Committing changes to an appropriately named branch
  • Linking to the relevant issue
  • Generating a meaningful description
  • Adding testing instructions
  • Ensuring all team conventions are followed
The pull request becomes the natural conclusion of your work with Ona, not a separate, manual process that breaks your flow. Our own PR prompt is:
Raise a draft PR for a branch starting with my initials following this template.
Get my initials from the configured git username. Check for manual file changes before creating the PR description.
Make sure you capture all changes in this environment.

## Description
<!-- Describe your changes in detail -->

## Related Issue(s)
<!-- List the issue(s) this PR solves -->
Fixes <issue>

## How to test
<!-- Provide steps to test this PR -->

Review code

Writing code has become so effortless that reviewing is the new bottleneck. Developers who previously spent hours crafting changes now produce them in minutes. As a result, PR queues have exploded, and code review has become the constraint that limits velocity.

Draft PRs have become our pressure valve

Rather than waiting for perfect, review-ready code, we lean heavily into GitHub’s draft PR mechanism. Developers push early and often, letting reviewers peek into work-in-progress while the AI agents continue iterating. This parallel processing means feedback arrives while changes are still malleable, not after significant investment in a particular approach.

Tests matter more than implementation details

Our reviewers prioritize tests over implementation details. Tests define the expected behavior and quality standards. With AI-generated code, tests become the most crucial component to review. While reviewers still check database patterns, security, and performance, well-written tests that comprehensively verify functionality indicate acceptable implementation.

Consistency through context

Ona Agents have access to the same coding guidelines and style documents that developers use. We maintain these as markdown files directly in our repositories. This shared context means both the code author and reviewer can query: “How well does this match our established patterns?” The AI handles the mechanical consistency checks, freeing reviewers to focus on architectural decisions and business logic.

Review like Mads: encoding institutional knowledge

We’ve captured the review style and expertise of our best reviewers, turning their institutional knowledge into accessible prompts. Our prime example: “Review like Mads.” Mads is our frontend lead, and his reviews are thorough and insightful, catching subtle issues. Rather than hoping everyone develops his instincts through years of experience, we’ve encoded his review patterns into a prompt that anyone can invoke. The process is straightforward but powerful:
  1. Analyze the last 30 days of Mads’ PR reviews
  2. Identify patterns in what he values and comments on
  3. Understand his specific areas of focus and common corrections
  4. Transform these insights into a structured review prompt
  5. Make it available as a /review-like-mads command
Here’s the meta-prompt we use to generate these reviewer-specific prompts:
Analyze the pull request reviews from [REVIEWER_NAME] over the past 30 days. Focus on:

1. Review patterns and style:
   - What aspects of code does [REVIEWER_NAME] consistently examine?
   - What's their commenting style—direct, questioning, suggestive?
   - How do they balance criticism with encouragement?

2. Technical focus areas:
   - Which code patterns do they frequently flag?
   - What performance or security concerns do they raise?
   - Which best practices do they enforce?

3. Common corrections and suggestions:
   - What mistakes does [REVIEWER_NAME] repeatedly catch?
   - What refactoring patterns do they suggest?
   - Which architectural principles do they advocate?

4. Values and priorities:
   - What non-functional requirements matter to them (readability, maintainability, performance)?
   - How do they weigh tradeoffs?
   - What makes them approve enthusiastically vs. reluctantly?

Transform these observations into a structured review prompt that captures [REVIEWER_NAME]'s review methodology. The prompt should guide an AI to review code with the same attention to detail, technical standards, and communication style.

Format the output as a prompt that begins with: "Review this code as [REVIEWER_NAME] would, focusing on..."
Beware that the above requires the GitHub or GitLab MCP server to be set up.

Giving context

Ona Agents are great at understanding intent, but they cannot read minds. Much like you would guide a junior engineer, Ona needs guidance.

Explicit TODOs

Ona Agent’s planning helps the agent maintain focus and helps users understand progress. At the beginning of a task, Ona will plan its todos. You can, at any point in the conversation, ask Ona to add or modify todos to influence that plan. While you can always steer Ona throughout the conversation, using todos is particularly useful for adding additional steps. For example:
  • Add a todo item to review and simplify your code
  • Remove the next todo item
It’s rare that you need to talk to Ona about todos directly, but it’s a powerful tool to steer the agent’s behavior.

Give Ona URLs

_Screenshot: Ona agent reading external documentation (Anthropic prompt caching docs) and creating todos_
Ona Agents can read from web URLs. They will use that ability to read documentation (e.g. Go docs), but can also be explicitly instructed to do so. We use this a lot to understand a new API, service, or best practice. For example, the initial draft of our token caching system was created using the explore-plan-build workflow, asking Ona to read the Anthropic reference first.

Use screenshots

Ona can view PNG images in its environment. It’s easy to upload them into an environment’s file system using VS Code Web: simply drop them into the file browser. You can then ask Ona to interact with that screenshot.

Run shell commands directly

Sometimes asking an LLM is more effort than just running the command directly. VS Code Web on the side makes that easy to do. There’s an even quicker way: prefixing a command with !, e.g. !pwd, runs it directly instead of instructing the agent. The command and output become part of the agent’s context. This is a powerful way to explicitly give the agent context, e.g. a particular file listing or test failures. It also ensures that you and the agent are on the same page.
_Screenshot: Bang command execution showing the !pwd command running directly in the Ona interface_

Go parallel

Ona Agents run in parallel; multiple agents working on different things at the same time, in different environments. Ona’s interface is built to support this parallelism, by lowering the cost of context switching.

Start multiple tasks

When starting work from the prompt box on the Home page, pressing Enter will redirect to the new conversation. Cmd+Enter (Ctrl+Enter on non-Mac) will start the agent but stay on the Home page, retaining all context. This is useful for rapidly firing off bits of work one after another.
A key element is the 90/10 rule: let Ona Agents produce 90% of the work, and drive home the remaining 10% yourself. You’ll get the most out of your time if you start tasks so that you can “harvest” their results in a staggered fashion, i.e. focus on the 10% of one task while the others still work autonomously.

One environment per task

As engineers we are used to making a new branch per task. With Ona, we strive for one environment per branch. This way environments are completely isolated from each other: you or Ona can make many changes against the same repo without file system conflicts. Ona Environments are designed to be ephemeral and will be automatically deleted after some time. Treat them as disposable resources, not as something to maintain. They auto-stop after some time of inactivity to save cost, auto-start when the agent becomes active, and can be manually started using the toggle in the environment details.
_Screenshot: Environment management showing one environment per task_

/clear for a clean slate

During a conversation we might learn what we should have asked in the first place. Ona Agents can also go astray. To retain in-environment context (e.g. modified but uncommitted files, database state, or one-off networking setup), /clear is helpful: it resets the conversation with the agent but keeps the environment as is. This is an escape hatch, not a recommended way of working.
_Screenshot: The /clear command interface for resetting a conversation_

On Mobile, on the go

The best development environment is the one you have with you. Ona works well on a phone—not as a compromise, but as a genuinely useful extension of our workflow. Open app.ona.com on your phone and work just like you would on your laptop. The autonomous nature of these agents makes mobile interaction surprisingly natural. We’re not manipulating code on a tiny screen; we’re having a conversation about intent and reviewing outcomes. We’ve found ourselves compulsively checking in while waiting for a train, during lunch, or after putting the baby to sleep. Especially on the go, being able to kick off a prototype idea is immensely powerful. Many ideas you see in Ona today started as a prototype produced while waiting for the bus or for dinner to be ready.

Optimize your flow

Ona is a very effective tool; like any power tool, it provides great results from the start, which get better as you learn the tricks of the trade.

Be very explicit

Ona Agents perform best when given very explicit instructions, to the point of over-specification. Providing clear instructions and pointers considerably increases the duration of sensible autonomous work. For example:
Not great: add a new database entity foo
Good: add a database entity foo for the backend which has a name, description and magic value. Users will search for the magic value, but name and desc are only shown. The magic value cannot be changed and is a number from 1 to 100.

Not great: add a new endpoint to our backend that allows users to get all of their purchases
Good: Take a look at purchase.go. This file contains the definition for an endpoint that returns a single purchase by ID. Please note how authorization is done and how the tests in purchase_test.go look - these are in line with our best practises. Add a new endpoint (GET /purchases) in the same file to get all of a user’s purchases following the same authorization and testing best practices.

Not great: explain the codebase
Good: Analyze this codebase and explain its general structure, architecture, and key components. What are the most important things a developer should know to get started? Include information about the tech stack, main directories, entry points, and any critical patterns or conventions used.

Not great: the sidebar height changes when conversations start
Good: Understand how:
  • we currently keep a stable height for sidebar entries despite font weight changes
  • the progress counter pill changes the height of the sidebar item
  • we could avoid that height change in line with existing patterns.
Implement that fix.
Learning to specify your intent is the key skill to master when using AI SWE agents to write code.

Actively encourage reflection

For larger changes, or when there are multiple solutions, it’s useful to ask Ona Agents to reflect on their work. This leads to simpler code and the discovery of misunderstandings. Use something like: “review and critique your changes. How can you simplify things?”

Course correct, stop if needed

Join the conversation any time you feel the need to course correct; Ona Agent will pick up the message whenever it can. Should things go off the rails, click Stop or press ESC. You can continue the conversation right where you left off by sending a (potentially corrective) message.
_Screenshot: Agent interface showing task input with "Describe your task" placeholder_

90% agent, 10% human

Don’t try to make very fine adjustments using the agent; sometimes opening a text editor and moving that line yourself is more efficient. Follow a 90%/10% split:
  • Use the agent to handle the bulk of the changes, letting it do the heavy lifting. Start by giving precise instructions and encourage questions from the agent.
  • Finish and refine your work using VS Code web for precise control. We have gotten a lot of mileage out of using the diff view in VS Code web for an initial review of changes.
  • Avoid trying to make small “micro-adjustments” through the agent - these are better handled directly.
90/10 rule illustration showing agent handling 90% of work while human handles 10%
Starting an adjustment, e.g. a refactoring, and asking Ona to understand and complete the changes is a very effective way of working.

Commit early, commit often

Git can act as a checkpoint system, enabling rollback should things go the wrong way. Any changes that are functional, but perhaps incomplete, should be committed; Ona can always rewrite the history or squash the commits once the change is complete. Asking Ona to commit its changes is a fine way to maintain a good history. Ona will write sensible commit messages that help keep track of the work. In practice we rarely need to manually revert to a prior commit, and Ona will use the commit history to course correct itself. As a rule of thumb: the larger the change you’re trying to make, the more often you want Ona to commit. Be explicit about when, what and how often you want Ona to commit. We use prompts like:
  • Commit all your changes, once
  • Do not commit or push
  • Keep committing whenever you think is a good time
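The checkpoint-and-squash flow above can be sketched in plain git. The repository, file names, and commit messages below are made up for illustration; in practice Ona runs the equivalent commands for you when asked to commit:

```shell
# Sketch of checkpoint commits followed by a squash; contents are illustrative.
set -euo pipefail

repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit -qm "baseline"

# Checkpoint commits: functional but incomplete states of a larger change.
echo "v2" > app.txt
git add app.txt
git commit -qm "checkpoint: wire up new endpoint"

echo "v3" > app.txt
git add app.txt
git commit -qm "checkpoint: add tests"

# Once the change is complete, squash the checkpoints into one commit.
git reset --soft HEAD~2
git commit -qm "add purchases endpoint with tests"

git log --oneline   # baseline plus one squashed commit
```

`git reset --soft` keeps all the checkpointed work staged, so the final commit contains the complete change with a single, clean message.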

Ask for a preview

Ona Environments are independent of your laptop and can therefore serve previews of your work that are accessible beyond localhost.
Ona providing a preview of frontend changes
Ona will try to provide a preview when it thinks that’s helpful. You can also explicitly ask for one:
Provide a preview for the front end changes you just made.
The generated URL can be shared with friends and colleagues. Once you stop or delete the environment, the URL is no longer available. Notice how preview URLs show up as a “Preview Server” service and an open port on the environment details page. You can start/stop the service at any time and end sharing of the preview by closing the port. We have found previews to be immensely useful and want to highlight two use cases:
  1. **Developing for mobile.** Suppose you’re making front end changes and want to try them out on mobile. You can have the conversation on your phone directly or on the desktop. Asking Ona for a preview URL and then opening that link directly from the conversation on your phone is a very powerful way to view the changes you’re making.
  2. **Before vs After.** Specifically for design work and fine adjustments, a before-and-after view is very helpful. You can ask Ona specifically to produce a before and after page of some changes and give you a preview for it.
    Generate a standalone demo page that visually compares the Before vs After state of my changes. 
    
    **Requirements:**
    - Two side-by-side panels: ❌ Before and ✅ After
    - Exact measurements labeled (e.g., [old-value] vs [new-value])
    - Visual rulers showing width/height differences where applicable
    - Code snippets comparing old vs new implementation
    - Technical Details section with Problem, Solution, and Benefits
    - Ona theme styling using CSS variables (dark theme)
    - Responsive design for mobile viewing
    
    **Process**
    1. Create the demo HTML page
    2. Start a preview server for the demo HTML page
    3. Report the URL of the preview server to the user
    
    **Template Structure:**
    - Header with component name and description
    - Side-by-side comparison panels
    - Visual measurement indicators
    - Code diff snippets
    - Technical details section
    - File change information
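Step 2 of the process above boils down to serving a static file over HTTP. As a minimal sketch (assuming python3 is on PATH, as in most dev container images; the path and page content are made up), this is roughly what a preview server amounts to; Ona then exposes the bound port as a shareable URL:

```shell
set -euo pipefail

# Write a tiny stand-in for the Before-vs-After demo page (illustrative only).
mkdir -p /tmp/demo
cat > /tmp/demo/index.html <<'EOF'
<!doctype html>
<title>Before vs After</title>
<main>
  <section><h2>❌ Before</h2><p>old layout</p></section>
  <section><h2>✅ After</h2><p>new layout</p></section>
</main>
EOF

# Serve it; port 0 lets the OS pick a free port (http.server logs which one).
python3 -m http.server 0 --directory /tmp/demo >/tmp/demo/server.log 2>&1 &
server_pid=$!
sleep 1
kill -0 "$server_pid"   # the server is up and serving the demo page
kill "$server_pid"      # stopping it ends sharing, like closing the port in Ona
```

You rarely need to run this yourself: asking Ona for a preview, as in the prompt above, has the agent create the page, start the server, and report the URL.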