Ona Agents are AI-powered software engineers that work in your environments. They can read code, run commands, modify files, and execute complex tasks - all within the secure, isolated context of your Ona Environment.

How agents think and work

Agents follow a structured approach to solving problems. When you give an agent a task, it:
  1. Understands what you’re asking for
  2. Explores your codebase to gather context
  3. Plans how to solve the problem
  4. Executes the changes (writing code, running commands)
  5. Verifies the work (running tests, checking output)
This matches how experienced developers work.

Agent capabilities

An Ona Agent operates like a developer with full environment access:
  • Read and write files - View code, create new files, edit existing ones
  • Execute commands - Run builds, tests, linters, deployment scripts
  • Search codebases - Find functions, classes, patterns across your code
  • Open pull requests - Create branches, commit changes, push to remote
  • Use bash commands - Run commands like !git status or !npm test for quick context
  • Respond to feedback - Iterate based on your guidance
The agent sees everything in your environment: your code, tools, configuration, and running processes.

Talking to your agent

Ona Agent works out of the box with zero configuration. Open your environment at app.ona.com, start a conversation in the chat panel, and give it a task. The agent works autonomously in your environment, reading code, making changes, and running commands. When it’s done, it can open a pull request for your review. As you use it more, you can teach it your codebase conventions with AGENTS.md, create reusable workflows with skills, and configure guardrails for safe execution.

Practical scenario 1: Add error handling

Real development task: Your API endpoints need better error handling.

The prompt

Look at the POST /api/transactions endpoint in backend/server.js.
Add input validation before processing the transaction:
- Validate that portfolio_id and stock_id are provided
- Validate that quantity is a positive number
- Validate that price_per_share is a positive number
- Validate that transaction_type is either "BUY" or "SELL"
- Return 400 with a descriptive error message for invalid inputs
- Follow the error handling pattern already used in the SELL branch

What the agent does

  1. Explores: Reads backend/server.js to understand the existing transaction endpoint and error patterns
  2. Plans: “I’ll add validation checks at the top of the handler, before the transaction logic runs”
  3. Executes: Adds validation code with appropriate 400 responses
  4. Verifies: Tests the endpoint with curl to confirm valid and invalid requests behave correctly
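The validation the agent adds might look something like the sketch below. This is a hypothetical illustration, not the actual app code: the field names come from the prompt above, and `validateTransaction` is an invented helper assuming an Express-style handler.

```javascript
// Hypothetical sketch of the validation an agent might add before the
// transaction logic in POST /api/transactions (backend/server.js).
// Returns an error message for invalid input, or null when valid.
function validateTransaction(body) {
  const { portfolio_id, stock_id, quantity, price_per_share, transaction_type } = body;
  if (!portfolio_id || !stock_id) {
    return "portfolio_id and stock_id are required";
  }
  if (typeof quantity !== "number" || quantity <= 0) {
    return "quantity must be a positive number";
  }
  if (typeof price_per_share !== "number" || price_per_share <= 0) {
    return "price_per_share must be a positive number";
  }
  if (transaction_type !== "BUY" && transaction_type !== "SELL") {
    return 'transaction_type must be "BUY" or "SELL"';
  }
  return null;
}

// In the route handler, before the transaction logic runs:
// const error = validateTransaction(req.body);
// if (error) return res.status(400).json({ error });
```

Keeping validation in a small helper like this also makes step 4 (verification) easy: the agent can exercise each rule with a quick curl request and confirm the 400 responses.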

Why this works

  • Specific: You named the exact file and endpoint
  • Examples: You referenced an existing pattern in the same file
  • Clear outcomes: You listed the specific validations to add

Practical scenario 2: Write tests

Real development task: The backend has no test coverage.

The prompt

The backend API in backend/server.js has no tests.
Create a test file backend/server.test.js using the built-in Node.js test runner.

Cover these cases for POST /api/transactions:
- Successful BUY transaction
- Successful SELL transaction
- SELL with insufficient shares returns 400
- Missing required fields

Also test GET /api/portfolios/:id:
- Valid portfolio returns holdings
- Non-existent portfolio returns 404

Run the tests to make sure they pass.

What the agent does

  1. Reads backend/server.js to understand the API routes and data flow
  2. Checks backend/package.json for available test tooling
  3. Writes test cases covering the specified scenarios
  4. Runs the tests to verify they pass
  5. Shows you the test results

Why this works

  • Specific file and endpoints named
  • Test framework specified
  • Test cases enumerated
  • Verification requested (run tests)

Practical scenario 3: Add a new feature

Real development task: Add a portfolio creation endpoint and UI.

The prompt

The app can display portfolios but there's no way to create one from the UI.

1. Add a POST /api/portfolios endpoint in backend/server.js that accepts
   a JSON body with "name" and "description" fields. Validate that "name"
   is provided. Return 400 if missing.

2. Add a simple form component in frontend/src/components/CreatePortfolioForm.jsx
   that lets users type a name and description, then submits to the new endpoint.
   Follow the style of TransactionForm.jsx.

3. Wire the new form into frontend/src/App.jsx.

Test the endpoint with curl, then verify the form works in the browser.

What the agent does

  1. Reads backend/server.js and frontend/src/components/TransactionForm.jsx to understand existing patterns
  2. Adds the new API endpoint with validation
  3. Creates the form component matching the existing style
  4. Updates App.jsx to include the new form
  5. Tests with curl and checks the frontend renders correctly
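The backend half of this change might look like the sketch below. The in-memory store and the `createPortfolio` helper are invented for illustration; the real app's data layer and response shapes may differ.

```javascript
// Hypothetical sketch of the POST /api/portfolios handler an agent
// might add to backend/server.js. The array is a stand-in store.
const portfolios = [];

function createPortfolio(body) {
  const { name, description } = body || {};
  if (!name) {
    // Mirrors the prompt: return 400 when "name" is missing.
    return { status: 400, body: { error: "name is required" } };
  }
  const portfolio = {
    id: portfolios.length + 1,
    name,
    description: description || "",
  };
  portfolios.push(portfolio);
  return { status: 201, body: portfolio };
}

// Express wiring might look like:
// app.post("/api/portfolios", (req, res) => {
//   const { status, body } = createPortfolio(req.body);
//   res.status(status).json(body);
// });
```

Separating the handler logic from the Express wiring this way also gives the agent a natural seam to test with curl before touching the frontend.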

Why this works

  • Named the problem: No way to create portfolios from the UI
  • Suggested structure: Listed the backend and frontend changes separately
  • Referenced existing patterns: “Follow the style of TransactionForm.jsx”
  • Verification: “Test with curl, then verify the form”

Using bash commands for context

Agents can run bash commands mid-conversation using the ! prefix:
!git status
!npm test
!ls src/components
!grep -r "TODO" src/
This lets agents quickly gather information without writing full command execution plans.

Example workflow

You: “Fix the failing test”
Agent: !npm test (sees which test is failing)
Agent: “I see user.test.ts is failing. Let me check that file…”
Agent: (reads file, identifies issue, fixes it)
Agent: !npm test (verifies fix worked)

Bash commands speed up the agent’s exploration and verification steps.

Writing effective prompts

Pattern 1: The explorer

When you don’t know where something is:
Find all places where we make API calls to the /users endpoint.
List the files and what each call does.
Agent searches the codebase and summarizes findings.

Pattern 2: The planner

For complex changes:
I want to add a "remember me" checkbox to the login form.
First, create a plan showing:
1. What files need to change
2. How you'll store the preference
3. How you'll extend the session duration

Show me the plan before implementing.
Agent creates a plan. You review it. Then you say “looks good, implement it” or suggest changes.

Pattern 3: The validator

When you want to check quality:
Review the changes I just made to auth.ts.
Check for:
- Security issues (XSS, injection, etc.)
- Edge cases I might have missed
- Opportunities to simplify the code

Don't make changes, just give feedback.
Agent acts as a code reviewer.

Pattern 4: The documenter

For keeping docs updated:
I just added a new environment variable `MAX_UPLOAD_SIZE`.
Find where environment variables are documented (probably README or .env.example)
and add documentation for this one, following the existing format.
Agent finds the doc file, adds the entry, matches style.

Pattern 5: The investigator

For debugging:
The app crashes when clicking "Export CSV".
Check the browser console logs I pasted below, find the source of the error,
and fix it. Test that the fix works.

[paste error logs]
Agent reads logs, identifies cause, fixes issue, verifies.

Best practices

Do’s

  • Be specific - Name files, functions, or areas of code
  • Reference examples - “Follow the pattern in auth.ts”
  • Request verification - “Run tests after making changes”
  • Ask for plans - “Show me your approach before coding”
  • Iterate - “That’s close, but use async/await instead of promises”
  • Use AGENTS.md - It auto-loads your conventions into agent context

Don’ts

  • Don’t be vague - “Make the app better” is too broad
  • Don’t skip review - Always check agent code before merging
  • Don’t forget tests - Ask agents to run tests after changes
  • Don’t ignore errors - If the agent makes a mistake, give clear feedback

When agents go off track

Sometimes agents misunderstand or take the wrong approach.

How to redirect

Stop early: If you see the plan is wrong, say:
Wait, that's not what I meant. I want to update the existing modal,
not create a new component.
Give examples: If output doesn’t match style:
The formatting isn't right. Look at how we format errors in error-handler.ts
and match that style.
Ask to revert: If changes are wrong:
That broke things. Revert those changes and let's try a different approach.

Prevention: Request plans

For non-trivial tasks, ask for a plan first:
Create a plan for adding pagination to the user list. Show me the plan
before writing code.
Review the plan. Correct it if needed. Then: “Looks good, implement it.”

Parallel workflows: Multiple agents

Each Ona Environment is isolated, so you can run multiple agents simultaneously on different tasks.

Why this matters

Three tasks that take 30, 10, and 15 minutes sequentially (55 minutes total) take only ~30 minutes in parallel, since each agent works in its own environment. This applies to team workflows too: one developer builds a feature while another reviews a PR, each in their own environment. Or you spin up three environments to test frontend, backend, and integration changes independently.

Try it: Run two agents in parallel

  1. Keep your current environment open
  2. Start a second environment (new tab, go to app.ona.com and create/launch another environment for your repo)
  3. In environment 1, give the agent a task:
    Add JSDoc comments to all exported functions in backend/server.js
    and backend/data-store.js. Document parameters, return values,
    and any error responses.
    
  4. In environment 2, give a different task:
    Look at all the API endpoints in backend/server.js and create a
    document listing each endpoint, its HTTP method, expected inputs,
    and possible error responses.
    
Both agents work at once. Switch between tabs to check progress. Review results when done. Use the left side panel in the Ona dashboard to see all active environments, connect to any of them, or stop ones you no longer need.

Troubleshooting

Agent seems confused
  • Be more specific: Name exact files or functions
  • Provide context: “This is a React app using hooks, not class components”
  • Reference AGENTS.md: “Follow the conventions in AGENTS.md”
Agent made incorrect changes
  • Give feedback: “That’s not right because…”
  • Ask it to revert: “Undo those changes”
  • Try a different prompt: “Let me rephrase…”
Agent won’t run tests
  • Explicitly ask: “Run npm test and show me the output”
  • Check if tests exist: !ls **/*.test.*
  • Verify test command: !cat package.json | grep test
Agent is too slow
  • Break into smaller tasks: Instead of “refactor everything,” do “refactor auth.ts first”
  • Use bash commands for quick checks: !git diff instead of asking agent to summarize
Want to undo everything
  • If no commits yet: !git checkout . (revert all changes)
  • If committed: !git reset --hard HEAD~1 (undo last commit)
  • Start fresh: Delete environment, create new one

What you’ve learned

You now know how to:
  • Direct agents effectively with specific, actionable prompts
  • Work with agents on real development tasks (error handling, tests, refactoring)
  • Use bash commands for quick context gathering
  • Write prompts that get good results (be specific, reference patterns, request verification)
  • Redirect agents when they go off track
  • Run multiple agents in parallel to increase throughput
Agents are powerful teammates when you give them clear direction and verify their work.
Next: Lab 4: Team Essentials