How do you hire engineers when agents write 99% of the code?
We found ourselves asking that question after realizing our hiring process was built for a different world. Software engineers have always done more than just write code, but writing code very well was the first prerequisite for getting hired.
But as agents get better at implementation, the engineers who create the most leverage are the ones with product taste, architectural judgment, and the ability to direct systems of humans and agents toward the right outcome.
At Augment, we've been writing openly about going AI-native and what it actually takes. This is the next chapter of that story.
The skills that matter now
As engineers begin working alongside increasingly capable AI agents, the nature of the job is shifting. Less time goes into writing code itself, and more time goes into deciding what should be built, designing systems that will hold up in production, orchestrating agents, and aligning teams around clear outcomes.
Coding still matters. But increasingly, it’s the part machines can help with.
More important now is judgment: the ability to choose the right problems, make sound architectural decisions, and direct both humans and agents toward meaningful outcomes.
If you zoom out, the shift looks something like this:
| Traditional Engineering | AI-Native Engineering |
|---|---|
| Writing code | Specifying intent and evaluating tradeoffs |
| Implementing solutions | Orchestrating agents |
| Solving problems | Choosing the right problems |
| Individual output | System-level outcomes |
Recognizing that the craft of engineering is moving up the abstraction stack led us to a simple question:
In an AI-native environment, what capabilities actually separate exceptional engineers from good ones?
Six dimensions of AI-native engineering
It’s helpful to ground this in Augment’s definition of AI-native engineering:
The human role is shifting from author to architect and editor. You're defining the intent, making design and trade-off decisions, setting guardrails, obsessing over user experience, and being the last line of quality.
With that in mind, we took half a day with a cross-functional team of EMs, ICs, and recruiters to go deep on this topic and think about it from first principles. From this discussion, six capabilities consistently surfaced. These are the dimensions we believe will matter most as engineering becomes increasingly AI-native.
| Dimension | Core Question |
|---|---|
| Product & Outcome Taste | Are we building the right thing? |
| System & Architectural Judgment | Will this survive production? |
| Agent Leverage | Can you turn AI into real engineering throughput? |
| Communication & Collaboration | Can you communicate intent clearly and collaborate across perspectives? |
| Ownership & Leadership | Do you drive outcomes, not just tasks? |
| Learning Velocity & Experimental Mindset | Can you evolve as fast as the tools? |
One thing you may notice missing from this list: raw coding ability as a standalone dimension. Coding still matters, but it’s no longer the primary differentiator of engineering talent.
Product & outcome taste
Are we building the right thing?
As code becomes cheaper to produce, the most expensive mistake is building the wrong thing. Engineers increasingly need to investigate user problems, cut through ambiguity, and define clear outcomes before implementation begins.
The highest-leverage engineer isn’t the one who writes the most code. It’s the one who ensures the team is solving the right problem.
System & architectural judgment
Will this survive production?
Agents can generate working code, but they are far less reliable at judging whether the system around it is sound. Architectural judgment still requires understanding long-term tradeoffs, operational realities, and the hidden risks that emerge at scale.
“It works” is easy. “It will keep working in production” is much harder.
Agent leverage
Can you turn AI into real engineering throughput?
AI-native engineers don’t simply use agents for assistance. They structure problems so agents can execute effectively, guide them when they drift, and validate the results they produce.
Think of it as delegation — except your reports are incredibly fast and occasionally confidently wrong.
Communication & collaboration
Can you communicate intent clearly and collaborate across perspectives?
As implementation speeds up, more of the work shifts toward clarifying problems, surfacing tradeoffs, and incorporating input from different parts of the team. Engineers increasingly need to communicate clearly, listen well, and build shared understanding quickly.
The fastest teams aren’t the ones that code the fastest — they’re the ones that reach clarity the fastest.
Ownership & leadership
Do you drive outcomes, not just tasks?
Great engineers own outcomes end-to-end, not just their piece of the code. When something blocks progress — whether it’s slow builds, unclear workflows, or gaps between systems — they step in and fix it, even if it’s outside their immediate scope.
Ownership means removing whatever stands between the team and the outcome.
Learning velocity & experimental mindset
Can you evolve as fast as the tools?
The tools we use today will not be the ones we use three months from now. Engineers who thrive here experiment constantly, change their workflows quickly, and drop old approaches when better ones show up.
Experimentation isn't a phase. It's the job now.
From ideals to criteria
A framework only matters if it changes how you hire. The next step was translating these dimensions into observable signals — behaviors we can evaluate during interviews.
For example:
- Can the candidate quickly clarify an ambiguous problem?
- Do they recognize architectural risks before they appear in production?
- Can they effectively direct and validate AI-generated work?
We focused on engineering roles first, where the shift to AI-native workflows is already the most obvious. We’ll extend this work to other disciplines next, adapting the criteria to each.
The profiles we look for now
We identified four profiles that will anchor our hiring in the near term.
AI-Native Systems Engineer
Strong architectural judgment and deep infrastructure instincts. Responsible for keeping the foundations sound as agents build faster on top of them.
AI-Native Product Engineer
Strong product taste and user empathy. Focused on defining the right problems and iterating toward outcomes that matter.
AI-Native Applied AI Engineer
Deep understanding of models and how to build effectively on top of them. Responsible for improving the capabilities of our agents and workflows.
AI-Native Early Professional
Learning velocity above all else. Engineers who are growing up agent-first and adapt quickly as the tools and workflows change.
Each profile weights the six dimensions differently, and each now has an interview loop built around the signals that matter most for that role.
Hiring reflects your values
One useful side effect of rethinking hiring was that it forced us to make our engineering values explicit.
These six dimensions aren't just shaping recruiting. They're also influencing how we think about performance, growth, and career development. If judgment, leverage, and learning velocity matter most, those capabilities should show up everywhere, not just in interviews.
What comes next
We're sharing this early because we expect the framework to change. The tools are changing quickly, and our view of what great AI-native engineering looks like is changing with them.
If you're excited by a future where small teams of engineers work alongside large teams of agents, and where the craft centers on product taste, systems judgment, and orchestration, we'd love to hear from you.
If you're an engineering leader wrestling with the same questions, we’d be interested in how you're thinking about them.
No one has this fully figured out yet. But hiring is already changing, whether interview loops have caught up or not.
This is part of an ongoing series about Augment's AI-native transformation. Previous posts: We're in an exponential and The hardest part about going AI-native isn't just technical.
Written by

Alex Ding
Senior Engineering Manager
Alex Ding is Senior Engineering Manager at Augment Code, where he leads development of AI-powered developer tools. He brings over a decade of engineering leadership experience, most recently as Manager of Platform Engineering at C3 AI, where he led high-performing teams building enterprise-scale systems. He’s particularly interested in how AI is transforming software engineering, and how teams can reinvent both their tooling and their ways of working to navigate this generational shift.

Alyah Sablan
Recruiting Operations
Alyah Sablan leads Recruiting Operations at Augment Code, building the systems and tools behind how the company hires and grows its team. Before joining Augment, she oversaw recruiting operations for industry hiring at Duolingo.

Chris Marty
Head of People
Chris Marty is Head of People at Augment Code, where he builds teams at the intersection of AI and software engineering. Earlier in his career he held roles at Wealthfront, Fleetsmith, Dropbox, and Google, and spent several years working with early-stage founders at Unusual Ventures. He lives in San Francisco and is always happy to talk talent, early-stage teams, or pinot noir.

Vinay Perneti
VP of Engineering
As VP of Engineering, Vinay supports product, research, and engineering teams building AI agents that truly understand large, complex codebases. Before joining Augment, Vinay led product and platform organizations at Meta and Pure Storage. He’s drawn to problems that live at the intersection of technology and people, like how teams evolve, how AI reshapes the craft of software engineering, and what it takes to build things that evoke delight.
