What should my engineering org look like in 12 months? That's the question underneath every AI tool evaluation, every POC, every seat license debate. And it makes sense: the first step to provisioning a team is to understand what their day-to-day looks like and what they need. We can't buy tools without knowing what shape the org will take.
Our CEO Matt wrote about why this age of AI is a transformation problem, not a tooling problem. Our VP Eng Vinay wrote about what breaks when you move fast without solving for confidence. This post is the map. Four stages of transformation, what changes at each one, and what the operating model looks like when you get there.
Four stages of AI transformation: orchestration is the goal
We see organizations move through the AI-native transformation in four stages. Most are in stage one. The ones pulling ahead have reached stage three. Almost no one has reached stage four, full orchestration, yet.

The transformation path: from agent adoption to agent orchestration. Most organizations (~70%) are still in stage one — integrating agents into daily workflows. Fewer than 10% have agents owning work autonomously, and almost none have reached full agentic orchestration.
| Stage | What it looks like |
|---|---|
| 1. Adopt agents | Developers use AI coding tools in their daily workflow. POCs running, seat licenses debated. |
| 2. Shift to AI-native | Agents have access to build, test, deploy, and diagnose CI/CD failures on their own. |
| 3. Expand scope | Agents own code review, smaller tasks end-to-end, incident response. Point automation exists, but the system is not fully integrated. |
| 4. Orchestrate | Layers of agentic oversight amplify every human action. One decision cascades through a hierarchy of agents. |
What stage 4 looks like in practice
Orchestration is the goal, but what does it look like day to day, and how do engineering tasks shift? The software development lifecycle has six phases that every engineering org recognizes: ideation, code generation, code review, validation, build/test failures, and incident response.
In three of those six, agents are now the primary driver at Augment. Here's how each phase works today.

The AIDLC, the AI-driven development lifecycle, as a continuous loop.
Ideation is still human-led. A product owner or engineer defines the intent: what gets built and why. Agents assist with research, context gathering, and spec drafting. The spec is the product at this stage. As Zizzy wrote in Specs are infrastructure in the age of agents, specs stop being documents that describe the system and become ones that actively govern how it's built. When agents are responsible for implementation, they constantly encounter ambiguous decisions. A well-written spec gives them something to anchor against. For smaller tasks, agents pick work up autonomously from Slack, Linear, or a ticket queue without a human assigning it.

Ideation and intent are human-led.
Code generation is fully agent-driven. Humans should not be writing code. That's the mindset shift. For larger features, a human iterates on a spec and agents do the implementation. For smaller tasks, an agent owns the entire flow end-to-end: writing the code, testing it, documenting it, responding to review feedback, fixing merge conflicts. The PR lands ready to merge without a human touching a line.

Agents write all code.
Code review is where the traditional bottleneck lives. Agents do an initial bug scan and triage PRs by risk level. Low-risk changes get auto-approved. Higher-risk changes get a collaborative review where the agent surfaces key risks and architectural decisions so the human reviewer focuses on what matters instead of reading every line.
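The triage logic described above can be sketched in a few lines. This is a hypothetical illustration, not Augment's actual implementation: the `PullRequest` shape, the heuristics, and the score threshold are all stand-ins for whatever signals a real triage agent would use.

```python
# Hypothetical sketch of risk-based PR triage.
# PullRequest, SENSITIVE_PATHS, and the threshold are illustrative only.
from dataclasses import dataclass

SENSITIVE_PATHS = ("auth/", "billing/", "migrations/")

@dataclass
class PullRequest:
    files_changed: list
    lines_changed: int
    touches_auth: bool = False

def risk_score(pr: PullRequest) -> int:
    """Score a PR: higher means a human reviewer should be involved."""
    score = 0
    if pr.lines_changed > 300:
        score += 2  # large diffs are harder to auto-verify
    if any(f.startswith(SENSITIVE_PATHS) for f in pr.files_changed):
        score += 3  # sensitive areas always get human eyes
    if pr.touches_auth:
        score += 3
    return score

def triage(pr: PullRequest) -> str:
    """Route low-risk changes to auto-approval, the rest to collaborative review."""
    return "auto-approve" if risk_score(pr) < 3 else "collaborative-review"
```

A docs-only change routes to `auto-approve`; anything touching `auth/` routes to `collaborative-review`, where the agent surfaces risks for the human rather than replacing them.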

The traditional bottleneck, restructured.
One enterprise customer told us their developers were spending 30% of their week blocked on code review. Within three weeks of agent-driven triage, that dropped to under 10%. But as Vinay wrote, velocity without understanding is its own problem. The review process has to rebuild system understanding, not just check for bugs.
Validation doesn't need a developer's machine. An agent deploys the PR to a staging environment, exercises the feature through browser automation, and validates it works. The agent maintains and evolves the test plans over time.
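The validation flow reduces to a deploy step plus an agent-maintained test plan. A minimal sketch, assuming the deploy and browser-check functions are supplied by real tooling (a CD pipeline and something like Playwright); the names here are hypothetical:

```python
# Illustrative sketch of agent-driven validation against staging.
# deploy_to_staging and each check in test_plan are stand-ins for real
# deploy and browser-automation tooling; this only shows the control flow.
from typing import Callable

def validate_pr(pr_id: str,
                deploy_to_staging: Callable[[str], str],
                test_plan: list) -> dict:
    """Deploy the PR, run every (name, check) step in the plan, report results."""
    url = deploy_to_staging(pr_id)                     # returns the staging URL
    results = {name: check(url) for name, check in test_plan}
    results["passed"] = all(results.values())          # end-to-end verdict
    return results
```

Because the agent owns `test_plan`, it can add, retire, or tighten steps as the feature evolves, which is what "maintains and evolves the test plans" means in practice.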

Fully agent-driven.
The bar isn't "does it compile" or "do the unit tests pass." It's "does it do the right thing, end to end." Most teams don't have the infrastructure to clear that bar today.
Build failures get diagnosed by agents first. Easy cases are fixed automatically. Harder cases trigger a collaborative session between an agent and a human. The knowledge from every session, autonomous or collaborative, gets distilled and persisted. The next time a similar failure happens, the agent handles it without human involvement.
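The diagnose-fix-persist loop can be sketched as follows. This is a toy model, assuming an in-memory store keyed on a naive failure signature; a real system would need durable storage and fuzzier matching than hashing the first log line.

```python
# Minimal sketch of the build-failure knowledge loop.
# knowledge_store and signature() are deliberately naive stand-ins.
import hashlib

knowledge_store: dict = {}

def signature(log: str) -> str:
    """Reduce a failure log to a stable lookup key (naive: hash the first line)."""
    return hashlib.sha256(log.splitlines()[0].encode()).hexdigest()

def handle_failure(log: str, escalate_to_human) -> str:
    """Apply a known fix if this failure was seen before; otherwise escalate and learn."""
    key = signature(log)
    if key in knowledge_store:
        return knowledge_store[key]       # autonomous fix, no human involved
    fix = escalate_to_human(log)          # collaborative session with a human
    knowledge_store[key] = fix            # distill and persist the outcome
    return fix
```

The first occurrence of a failure triggers the collaborative path; every later occurrence of the same signature is handled autonomously, which is the compounding the post describes.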

Every failure makes the system smarter.
Incident response is the least mature phase. Instead of paging a human on-call engineer at 2 AM, a coordinated set of agents (triager, investigator, PR author, Slack coordinator, SRE) works the incident, orchestrated by an Incident Coordinator agent. Humans are available as a resource, but they're not driving. The knowledge from every incident compounds for the next one.
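The fan-out pattern above can be sketched as a coordinator that runs triage first and hands its findings to each role agent. This is a simplified, sequential sketch; the role names mirror the post, but real agents would run concurrently and share richer context.

```python
# Hedged sketch of an Incident Coordinator fanning work out to role agents.
# Each role is a plain callable here; this shows only the coordination shape.
def coordinate_incident(alert: dict, roles: dict) -> dict:
    """Run the triager first, then hand its findings to every other role agent."""
    findings = {"triage": roles["triager"](alert)}
    for name, agent in roles.items():
        if name != "triager":
            findings[name] = agent(alert, findings["triage"])
    return findings
```

Humans plug into this loop as one more resource the coordinator can call on, rather than as the driver, which is exactly the inversion described above.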

The least mature phase, and the most ambitious
The coordination works in controlled conditions. Real incidents are messy, and the handoff between agent investigation and human judgment isn't smooth yet. We're still iterating on this one.
The mindset shift is bigger than the workflow change
When we change our SDLC to an AIDLC, the change is visible: we can see the diagrams move. The harder change is how everyone in the org thinks about their job.
Engineers stop thinking of themselves as people who write code and start thinking of themselves as people who define intent and steer agents toward the right outcome. Vinay wrote about what this means for system understanding: when agents write the code, engineers lose the intuition they used to build by writing it themselves. The new skill is maintaining that understanding through specs and reviews rather than through implementation.
We're hiring for this now. The team wrote about the six dimensions we screen for: product taste, architectural judgment, agent leverage, communication, ownership, and learning velocity. Raw coding ability isn't a standalone dimension anymore.
It’s not just a tooling question.
The common thread: this isn't a tooling change that leaves the org intact. The roles change, the skills change, and the way you evaluate engineers changes. Every day we sit across from a VP of Engineering who has a spreadsheet comparing AI coding tools on features and pricing. And every day we tell them the same thing: you're thinking about the wrong problem.
We're in this with you, working on it every day. Our own engineering org is learning how to build differently in real time.
We're also going through this alongside customers. The transformation has to be tailored to each org's processes, internal tooling, and existing workflows. A 200-person engineering org hits different friction than a 2,000-person one. There's no one-size playbook.
You also can’t fill this gap with unlimited tokens. Engineering has always been about more than writing the code. Reorganizing your teams, changing your processes, building the agent infrastructure to make it all work while simultaneously shipping product? That's on you. Or you find someone who's already in the middle of it.
That's the conversation we think engineering leaders should be having. Not "which tool" but "who's going through this transformation with us."
Written by

Igor Ostrovsky
CTO and Co-founder
Igor dove head-first into generative AI in 2021 as Sutter Hill Ventures' Engineer in Residence, leading to the founding of Augment. Before this, as Chief Architect at Pure Storage, he led the technical development of FlashBlade to $2B in lifetime sales. His earlier experience includes a six-year tenure at Microsoft and reaching the ACM ICPC World Finals in 2007.

Anshuman Pandey
GTM
Anshuman Pandey is an Enterprise Account Executive at Augment Code, where he focuses on enterprise customer strategy and GTM execution. A developer turned sales leader, he brings a decade of experience spanning full-stack software development, solutions architecture, and strategic accounts at companies like Palo Alto Networks and State Farm.

John Edstrom
Director of Engineering
John is a seasoned engineering leader currently redefining how engineering teams work with complex codebases in his role as Director of Engineering at Augment Code. With deep expertise in scaling developer tools and infrastructure, John previously held leadership roles at Patreon, Instagram, and Facebook, where he focused on developer productivity and platform engineering. He holds advanced degrees in computer science and brings a blend of technical leadership and product vision to his writing and work in the engineering community.
