
Specs are infrastructure in the age of agents

Mar 13, 2026
Zizhuang (Zizzy) Yang

In traditional development, specs were mostly planning documents. Once implementation started, they quickly became stale as the reality of the system emerged during coding.

In AI-native workflows, the relationship flips. Specs become living contracts that guide implementation, and agents continuously refer back to them as they build.

The traditional RFC process existed for a simple reason: exploring implementation was expensive, so teams had to front-load design into documents before writing code.

That constraint no longer holds. Agents collapse the gap between design and implementation, letting them inform each other in real time. Instead of disposable design documents, specs become durable infrastructure for guiding and evaluating implementation.

Engineers define the intent and constraints in the spec, agents explore the implementation space, and the system continuously refers back to the spec as the source of truth.

The gap in every design process

Even in organizations with rigorous RFC processes, design and implementation are still sequential: you write, review, approve, then build. The design remains theoretical until code exists, and by the time someone discovers that an API boundary doesn’t work or a data model doesn’t scale, changing course is expensive.

This gap hits less experienced engineers hardest. Writing a strong RFC requires anticipating implementation pitfalls you’ve never encountered, which is a skill that comes from experience, not from thinking harder. The engineers who would benefit most from early implementation feedback are often the least equipped to predict what that feedback would reveal.

AI tools help, but only partially. Engineers can now explore edge cases and stress-test assumptions with an agent while writing an RFC. The resulting specs are often better, but the fundamental workflow is unchanged: design still happens first, implementation later.

The AI-native workflow

In a truly AI-native environment, design and implementation stop being sequential. The implementer can draft a spec while simultaneously using agents — like Intent’s Coordinator agent — to explore, prototype (sometimes multiple designs in parallel), or even deploy something to staging. The spec and the implementation evolve together.

In an AI-native workflow, instead of discovering that an interface doesn’t work three weeks into implementation, you discover it while the spec is still open for review.

Reviewers are no longer evaluating a purely theoretical design; they're evaluating one that's already been partially pressure-tested against reality.

The deeper shift isn't just speed. It's that the spec becomes operational infrastructure: it stops being a document that describes the system and becomes one that actively governs how it's built.

When agents are responsible for large portions of the implementation work, they constantly encounter ambiguous decisions:

  • Should we expand scope to support X?
  • Is it acceptable to bypass this interface?
  • Should we introduce another abstraction here?
  • Does this implementation match the intended architecture?

Without a specification, the agent has two options: ask a human who may not have full context, or guess. Guessing is where agents quietly make scope and boundary decisions that nobody authorized.

A well-written spec gives the agent something to anchor against. When questions arise, the agent can consult the specification to stay within the intended architecture.
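One way to make that anchoring concrete is to capture a spec's constraints as checkable data the agent consults before acting. The sketch below is purely illustrative: the field names, rules, and the `check_decision` helper are invented here, not part of Intent or any real spec format.

```python
# Hypothetical sketch: a spec's guardrails encoded as data an agent
# can consult instead of guessing. All names and rules are invented
# for illustration.
SPEC = {
    "scope": {"user_groups", "settings_resolution"},  # features in scope
    "forbidden_bypasses": {"UserGroupStore"},         # interfaces that must not be bypassed
    "max_new_abstractions": 0,                        # new layers require a spec amendment
}

def check_decision(decision: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent decision."""
    target = decision.get("expands_scope_to")
    if target and target not in SPEC["scope"]:
        return False, "out of spec scope; escalate to a human reviewer"
    if decision.get("bypasses") in SPEC["forbidden_bypasses"]:
        return False, "interface bypass forbidden by spec"
    if decision.get("new_abstractions", 0) > SPEC["max_new_abstractions"]:
        return False, "new abstraction requires a spec amendment"
    return True, "within spec"
```

The point isn't the mechanism; it's that each of the ambiguous questions above maps to a rule a human wrote and approved, so the agent escalates instead of quietly deciding.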

In traditional engineering, specs guide humans.
In AI-native engineering, they also guide the agents writing the code.

That’s a higher bar: writing specs that guide agents requires more clarity, not less. Accountability stays with the humans who write and approve the spec, not the agent that follows it.


A concrete example

We saw this pattern play out recently at Augment while building a user groups system for enterprise tenants. The feature required new database schemas, API definitions, a settings resolution model, and a phased rollout plan, so the engineer started by writing a comprehensive spec covering the full system.

Two days later, the first implementation PR landed with service scaffolding and CRUD endpoints. That same PR also updated the spec. While building the stubs, the engineer realized the original phasing plan introduced a dependency that would unnecessarily block the feature’s rollout. The implementation plan in the spec was adjusted and the code change shipped together. Because the spec lived alongside the code, keeping it up to date became part of the implementation work.

A couple of days later, a second PR landed with the full database layer working against a real emulator, with nine methods tested end-to-end. By the time reviewers were discussing the spec’s design decisions in review, the core implementation was already validated and running — something that traditionally wouldn’t have happened until weeks later.

Reviewers weren’t evaluating a theoretical architecture. They were evaluating one that had already been pressure-tested against a real database.

As the project progressed, the spec continued to guide the work. Subsequent PRs followed its phased plan, and when a later refactor changed the data access pattern, the architectural boundaries defined in the spec made it a straightforward swap with no logic changes.
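That kind of swap is possible when the spec pins down an interface boundary and business logic depends only on the boundary, never on a concrete backend. The sketch below is a minimal illustration of the idea; the `UserGroupStore` interface and both implementations are hypothetical, not the actual code from the project.

```python
from abc import ABC, abstractmethod

# Hypothetical boundary a spec might define: callers depend on this
# interface, never on a concrete database client.
class UserGroupStore(ABC):
    @abstractmethod
    def add_member(self, group_id: str, user_id: str) -> None: ...

    @abstractmethod
    def members(self, group_id: str) -> set[str]: ...

# Original data-access pattern: a simple in-memory store.
class InMemoryStore(UserGroupStore):
    def __init__(self) -> None:
        self._groups: dict[str, set[str]] = {}

    def add_member(self, group_id: str, user_id: str) -> None:
        self._groups.setdefault(group_id, set()).add(user_id)

    def members(self, group_id: str) -> set[str]:
        return self._groups.get(group_id, set())

# Later refactor: a different access pattern behind the same interface.
class CachingStore(UserGroupStore):
    def __init__(self, backing: UserGroupStore) -> None:
        self._backing = backing
        self._cache: dict[str, set[str]] = {}

    def add_member(self, group_id: str, user_id: str) -> None:
        self._backing.add_member(group_id, user_id)
        self._cache.pop(group_id, None)  # invalidate stale entry

    def members(self, group_id: str) -> set[str]:
        if group_id not in self._cache:
            self._cache[group_id] = self._backing.members(group_id)
        return self._cache[group_id]

def is_member(store: UserGroupStore, group_id: str, user_id: str) -> bool:
    """Business logic written against the boundary, not a backend."""
    return user_id in store.members(group_id)
```

Because `is_member` only sees the interface, swapping `InMemoryStore` for `CachingStore` requires no logic changes, which is exactly the property the spec's boundaries bought the team.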

The spec wasn’t an artifact that got filed away after approval. It was the contract that kept the implementation on track.

Where this breaks down

When the spec is wrong, the agent will faithfully implement the wrong thing faster and more confidently than a human would. When the spec is too vague, agents fill gaps with confident-sounding decisions nobody explicitly authorized.

And when concurrent prototyping goes unchecked, teams can prematurely commit to an implementation approach simply because working code is psychologically harder to discard than a paragraph in a document.

The spec raises the floor. It doesn’t eliminate the need for experienced review.

Senior engineers are still critical. The difference is that their review becomes more productive, because they’re evaluating a design that has already been partially validated against reality instead of a purely theoretical proposal.

Conclusion

The spec is the contract that keeps concurrent design and implementation from turning into chaos, gives agents architectural guardrails, and gives reviewers something grounded in reality to evaluate.

A junior engineer with a spec and an agent can now build within architectural guardrails that once required years of experience to anticipate. Not because senior engineers become unnecessary, but because the review itself is sharper when there's working code on the table.

The question for every engineering org isn't whether this workflow is coming. It's whether your specs are written to be consumed by agents, not just humans.

Written by

Zizhuang (Zizzy) Yang

Member of Technical Staff

As a member of Augment’s technical staff, Zizzy builds tools to help engineers collaborate with agents to design, prototype, and ship software more effectively. Before joining Augment, Zizzy was a principal software engineer at Riot Games and also spent more than a decade shipping products, product infrastructure, and org-wide migrations at Meta.
