November 2025
The strongest teams now treat humans and AI as part of one engineering system: shared standards, shared environments, shared pipelines. When hiring loops, SDLC, and agent workflows all run on different rules, you get scattered experiments instead of dependable output.
This edition looks at what happens when you bring that work onto a single, structured foundation: from how you select and support engineers, to how you let AI write and change code, to how you evaluate agents in real terminals.
TL;DR
• Hiring is system design: Dan Luu’s ‘hiring lemons’ piece shows how weak evaluation loops filter out strong engineers and keep weak ones.
• AI coding inside the SDLC: Axuall’s 18-month report shows AI works reliably in production when every change passes standard review and test gates.
• Agents measured in real terminals: Terminal-Bench 2.0 provides structured, repeatable tasks for evaluating agents on actual terminal workflows.
• Ona migrations: Our story on moving from rules-based migrations to agents outlines how we turn cross-repo changes into a repeatable, auditable migration workflow.
• AWS re:Invent: Meet us in Las Vegas, Dec 1–5, at Booth 632 to walk through live Ona use cases like Java/.NET migrations and policy-driven updates inside your perimeter.
Luu applies the classic lemons model from economics to hiring and promotions and shows how weak observation leads to weak decisions. He describes how organizations lean on proxies like brand names, narrow checklists, and puzzle interviews when they cannot see actual impact, and how those choices push strong engineers out while keeping weaker ones in place. For AI-heavy teams, he effectively treats hiring as another feedback system: if the loop is noisy, the organization misjudges both engineers and agents and slows everything built on top of them.
This piece from Axuall documents 18 months of using AI coding tools as a standard part of engineering, not as a side experiment. The team scopes work tightly, routes AI changes through pull requests, and enforces the same review and test criteria on AI-generated code as on human-written code. They report that AI shines on functions, tests, and refactors and that disciplined review consistently catches overconfident or incorrect suggestions. The post gives engineering leaders a working pattern: integrate AI directly into an existing SDLC and let the process act as the safety system.
Seconds0 maps the next stage of agentic coding to three major shifts: automated context pipelines instead of manual copy and paste, agents that plan before they act, and workflows that treat agents like long-running software components. The piece leans on specific patterns such as structured task graphs, explicit state handling, and simple rollback strategies. The result is a practical bias toward better task decomposition, clearer environment contracts, and predictable logs, rather than one more layer of prompt tuning.
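To make those patterns concrete, here is a minimal sketch of a structured task graph with explicit state handling and rollback. The names (Task, execute) and the use of Python’s graphlib are our illustration, not code from the piece:

```python
from dataclasses import dataclass, field
from graphlib import TopologicalSorter
from typing import Callable

@dataclass
class Task:
    name: str
    run: Callable[[dict], None]       # applies the step, mutating shared state
    rollback: Callable[[dict], None]  # undoes the step if a later one fails
    deps: list[str] = field(default_factory=list)

def execute(tasks: dict[str, Task], state: dict) -> None:
    """Run tasks in dependency order; on failure, roll back completed work."""
    order = TopologicalSorter({t.name: set(t.deps) for t in tasks.values()})
    completed: list[Task] = []
    for name in order.static_order():
        task = tasks[name]
        try:
            task.run(state)
            completed.append(task)
        except Exception:
            for done in reversed(completed):  # undo in reverse order
                done.rollback(state)
            raise
```

The point is predictability: every step’s effect on state is explicit, and a failure leaves the system in a known configuration rather than a half-applied one.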
This whitepaper from Google and Kaggle defines an agent stack in clear layers and uses that structure to guide design decisions. It separates the model, tools, orchestration, and runtime environment, then walks through patterns for each: how to attach tools, how to coordinate multi-step work, and how to evaluate behavior over time. This guide helps teams treat agents as long-lived systems with lifecycles and responsibilities, not as one-off scripts.
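A minimal sketch of that layer separation, using hypothetical interface names rather than anything specified in the whitepaper:

```python
from typing import Protocol

class Model(Protocol):
    """The reasoning layer: turns context into a proposed next step."""
    def complete(self, prompt: str) -> str: ...

class Tool(Protocol):
    """A capability the agent can invoke, with a stable contract."""
    name: str
    def invoke(self, args: dict) -> str: ...

class Runtime(Protocol):
    """The environment where side effects actually happen."""
    def execute(self, command: str) -> str: ...

class Orchestrator:
    """Coordinates multi-step work across the layers below it."""
    def __init__(self, model: Model, tools: dict[str, Tool], runtime: Runtime):
        self.model = model
        self.tools = tools
        self.runtime = runtime
```

Keeping those boundaries explicit is what lets a team evaluate, swap, and version each layer independently over an agent’s lifecycle.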
Terminal-Bench 2.0 and Harbor move agent evaluation toward real development work instead of synthetic puzzles. Terminal-Bench defines a set of terminal tasks that look like day-to-day maintenance and debugging. This encourages engineering and platform teams to treat agent behavior as something to test with clear pass and fail criteria. That shift supports decisions about where agents belong in production rather than relying on isolated demos.
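As a generic sketch of that pass/fail framing (the task shape and names below are our assumptions, not Terminal-Bench’s actual interface):

```python
import subprocess
from dataclasses import dataclass
from typing import Callable

@dataclass
class TerminalTask:
    name: str
    setup: list[str]   # shell commands that prepare the environment
    check: str         # command whose exit code decides pass or fail

def evaluate(task: TerminalTask, run_agent: Callable[[str], None]) -> bool:
    for cmd in task.setup:
        subprocess.run(cmd, shell=True, check=True)
    run_agent(task.name)                    # agent works in the prepared env
    result = subprocess.run(task.check, shell=True)
    return result.returncode == 0           # explicit, repeatable criterion
```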
If you’re attending re:Invent next month, we’d love to meet you in person. Book time with our team to walk through concrete Ona use cases like Java and .NET migrations, cross-repository refactors, and policy-driven updates that run inside your perimeter. Treat the slot as a working session if you need to make near-term decisions on agent platforms or on how to move forward with your AI SDLC initiatives.
Stop by Booth 632 on the main floor to meet the team and see Ona’s newest capabilities in action.
In this story we describe how our customers move from manual migrations to rules-based tools, and then to agent systems that run in full development environments. We call out where recipe-style tools work well and where they stall on complex, multi-language estates and proprietary APIs. Our goal is to turn migrations into a repeatable workflow that uses a single playbook to plan, roll out, validate, and roll back changes across many repositories.
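In sketch form, one playbook applied repo by repo might look like the following; the method names (plan, apply, validate, rollback) are illustrative shorthand, not our actual API:

```python
def migrate(repos: list[str], playbook) -> dict[str, str]:
    """Apply one playbook across many repositories with per-repo outcomes."""
    results: dict[str, str] = {}
    for repo in repos:
        change = playbook.plan(repo)        # compute the intended change
        playbook.apply(repo, change)        # roll it out, e.g. on a branch
        if playbook.validate(repo):         # run tests and policy checks
            results[repo] = "passed"
        else:
            playbook.rollback(repo, change)
            results[repo] = "rolled back"
    return results                          # per-repo audit record
```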
Join our live session with Platform Engineering on Nov 20th where we dig into how enterprises move from consultant-heavy, spreadsheet-driven migrations to agents running in real development environments. We focus on concrete patterns for scoping, governance, and rollout so platform teams can turn once-a-decade projects into repeatable, auditable migration systems.
In this conversation our CTO Chris Weichel shares how we convert agent autonomy into predictable outcomes. He walks through the core primitives of Ona: isolated environments, narrow permissions, and structured checkpoints when agents propose changes across a large codebase. We also talk about how parallel agents expose new bottlenecks in code review and how we respond with guardrails and observability so platform and security teams can stay in control while throughput goes up.
In this guide we treat background agents as a pattern for long-running and recurring work, then spell out what it takes to operate that pattern safely. We outline ways to structure continuous remediation, scheduled migrations, and other cross-repo work so every agent has a clear scope, owner, and audit trail. The guide digs into specific levers such as isolation strategies, logging expectations, and handoff points for agent–human collaboration.
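One way to picture the "clear scope, owner, and audit trail" requirement is as a record attached to every run; the field names below are shorthand for illustration, not a documented schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AgentRun:
    agent: str                # which background agent executed
    scope: tuple[str, ...]    # repositories the agent is allowed to touch
    owner: str                # human accountable for the run
    started: datetime         # when the run began
    log_path: str             # where structured logs land for audit
    handoff_to: str | None    # who reviews results, if escalation is needed
```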
Ona now offers Enterprise Runner support in the AWS Mumbai region. This allows teams in India and nearby markets to keep code and data local while keeping their overall architecture consistent.
AWS re:Invent – Las Vegas, Dec 1–5, 2025
May your builds stay swift and your agents never drift,
Your friends at Ona