
The hardest part about going AI-native isn't just technical

Feb 27, 2026
Vinay Perneti

In my last post, I talked about how we're in an exponential. We can already see the craft of software engineering changing — agents writing the majority of code for senior engineers, entire apps shipped in weeks. It's just that this change is not evenly distributed yet.

The most important strategy when you're in an exponential is this: positioning matters more than perfection. Put agents at the center of how work gets done, so that when agents get better — and they will, fast — you ride the wave instead of watching it pass. That's exactly what we did going into this quarter. Augment is going fully AI-native, agent-first, even in areas where agents aren't perfect today.

But wait, wasn’t Augment already AI-native?

As we were embarking on this journey, it became clear to me that different folks had different definitions of what "AI-native" actually means.

As I promised in my last post, we plan to share, through a series of posts, what this transformation is actually like for us at Augment — the good, the bad, and the messy — so we can navigate this journey together. This post is the first.

Not all "AI-native" is the same

I started talking to engineers across the team and with leaders in AI-forward companies, and it became clear pretty quickly that "AI-native" meant wildly different things to different people.

For some, it meant having a chat window alongside their codebase. You ask the AI a question, it helps you write a function, you move on. That's useful. But that's AI-assisted, not AI-native.

For others, a growing group, it meant something completely different. Running multiple agents in parallel. Spending most of their time on intent, design, and quality — not writing code.

If half your team thinks AI-native means "I use an agent in the IDE" and the other half thinks it means "agents handle most of the execution while I architect and steer," you're not aligned. The problems that need solving in these two worlds are wildly different. One group is clearly set to ride the exponential, while the other group is not.

Our first step was to clearly define what AI-native means for us. We collectively landed on a definition that’s aligned with the exponential:

The human role is shifting from author to architect and editor. You're defining the intent, making design and trade-off decisions, setting guardrails, obsessing over user experience, and being the last line of quality.

That shift needs both the tooling and the people to change. You need agent productivity infrastructure, spec-first workflows, rethought code review. But the tooling doesn't land if the team is still operating in the old paradigm. You can't do one without the other.

Everyone's talking about the tooling. Nobody's talking about the people.

I had to step back and think about this. Everyone writes about AI-native as a technology problem: better models, better tools, better infrastructure. We’re building all of that. But technology alone doesn’t determine whether a transformation succeeds or quietly dies. There’s an equally hard human side.

We realized quickly that becoming AI-native isn’t just about changing tools or workflows. It’s about helping a team of talented engineers evolve their craft — and how they perceive the way they add value. That’s not a small shift. So we decided to treat this as a real transition with multiple facets, not something you can announce and expect to happen automatically.

For context, we’re an AI developer tools company. We build the tooling. We just launched Intent in early preview, an agent-forward way of building. And even we had to stop and reckon with the fact that the hardest part wasn’t the technology. It was us.

Slow down to speed up, in the right direction

Everyone wants to keep moving fast. There's never enough time to slow down. But you can't change direction when you're sprinting. You have to slow down first. So we took two full days out of an incredibly busy quarter — not to ship features, not to plan roadmaps — to create space and to think deeply about how we work. To orient our mental models to where we are headed.

Two days from everyone in Eng, Product, Design, and Research is a big investment of time, so I wanted to be super intentional about creating the right environment for the shift to happen. We focused on three goals:

Engage the builder muscle. Day 1 was a hackathon. Three themes: 10x your agent, 10x your team, 10x yourself. One hard constraint: everything must be built agent-first. Don't write the code yourself. Give the agent harder problems than you've ever given it. See where it breaks and where it surprises you.

The hackathon showed, firsthand, what's possible and what's not yet possible. A lot of engineers were already living in this future. It also surfaced the real gaps where agents clearly needed more infrastructure. More on what came out of the hackathon in a later post.

Engage the emotions. We opened Day 2 with something unexpected — an anonymous poll. More on that in a moment, because what came back set the tone for the entire day.

Harness the intelligence of the whole group. The rest of the day was bottom-up ideation with just the right amount of structure and minimal presentations. We used 1-4-All, a variant from Liberating Structures: everyone thinks alone first, then in groups of four, then as a whole room. It's a deceptively simple but powerful tool for drawing out ideas, providing psychological safety, and maximizing inclusion. The skeptic who stays quiet in a big group has already articulated their view twice before the room-level conversation even starts.

For the breakouts, we spent a bunch of time ideating on what the best prompts should be for the teams to go deep on. We framed two questions:

  1. Tooling question: What’s needed to make agents more productive?
  2. People question: What new behaviors do we need to adopt as a team?

What it really feels like to be in an exponential

Before diving into those breakouts, we asked one question: "What feelings are you sitting with right now?"

The resulting word cloud revealed the paradoxical nature of the change we're all heading into.

The biggest words: frustration. uncertainty. Right alongside them: curious. excited. hopeful. agency.

Sit with that for a moment. This is what an exponential actually feels like from the inside. Not the clean "we're so excited about AI" narrative you read in every company blog post. The real, messy emotional landscape of a team navigating a fundamental shift.

I think every engineering team going through this transition has a word cloud like this — most of them haven't created the space to surface it. And if you don't surface it, it doesn't go away. It goes underground. Passive resistance. Quiet disengagement. People going through the motions while privately wondering if their skills still matter.

That last part — that's the thing nobody talks about publicly.

Experienced engineers have spent years building deep expertise in writing code. That's their identity. That's where their professional pride lives. And now someone is telling them that the thing they've spent their career getting great at is being automated. Even if you frame it as "elevation," it can feel like a demotion. One breakout group named it directly: the shift from "proud builder" to "proud coordinator." Another raised the fear of skill atrophy — if you stop writing code daily, you lose the deep understanding required to step in when things break.

These aren't irrational fears. They're real. And you have to name them out loud, because glossing over them is how you lose your best people.

But what I'm actually seeing is more nuanced than the fear suggests. And more optimistic.

Becoming AI-native shouldn’t mean becoming shallow. If anything, it requires a deeper level of engineering craft, just applied differently. The goal isn’t to write less thoughtful code. It’s to apply that same rigor at the system level: in specs, architecture, evaluation, and review. If anything, the bar for engineering judgment is going up, not down.

Senior engineers have all the scars. They know which migration broke production. They know the data access library the team banned because it silently swallowed errors. That pattern recognition, that judgment, that instinct for "this looks right but it's going to break in production" — that's exactly what orchestration requires. Their expertise is more valuable now, not less. It's just applied differently.

Junior engineers come in without the old muscle memory — and that turns out to be a superpower. They throw agents at problems that veterans would solve manually. One breakout group's recommendation: "once a day, try the agent on a task you think is a moonshot." That adventurousness comes more naturally when you haven't already built 15 years of manual habits.

Both cohorts need each other. That's the new team dynamic.

The word cloud didn't show a team in crisis. It showed a team being honest about navigating genuine ambiguity. And that honesty is the prerequisite for everything that came next.

What surfaced from the breakouts

We channeled those feelings into two structured breakout sessions. Seven groups, working independently, arrived at a remarkably consistent set of insights. These weren't talking points from leadership. This is what the team came up with on their own when given real space and the right structure.

Making agents more productive

Verification is the central problem. Every single group named this. Agents can write code — that's not the bottleneck anymore. The bottleneck is that they can't verify their code actually works. No CI feedback, no test results, no way to spin up an environment or click through a UI across different surfaces.

Agents also lack human context. At Augment, we think about context a lot. Augment’s context engine is best in class, but it can only surface what’s already captured and recorded. Agents have no awareness of the parts of the system that were never written down: architecture decisions, trade-offs, unwritten conventions. The result: "locally correct code that's globally incoherent." So much critical knowledge lives in people's heads. Until you encode it in a form agents can consume, they'll keep making the same mistakes any new hire would — except faster and more confidently.

Changing how we work

Code review is the new bottleneck — and it's getting worse. The constraint has shifted from writing code to verifying it. Writers of agent-generated PRs are offloading the cognitive burden onto reviewers. And the volume makes it worse: you get a 1000-line PR, you leave zero comments. You get a 40-line PR, you leave five. Agent-generated PRs tend toward the former.

Spec clarity is now the highest-leverage human activity. If the spec is wrong, agents will confidently build the wrong thing faster than ever. Multiple groups independently arrived at the same conclusion: spec reviews may be more critical than code reviews. One team reframed the whole workflow: "Ticket to PR is the wrong aspiration. It should be ticket to spec, spec to PR."
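As a concrete illustration of "ticket to spec, spec to PR" — the template and every detail below are invented for this post, not Augment's actual format — a spec an agent can build against might be as lightweight as:

```markdown
# Spec: Rate-limit the public search API

## Intent
Protect the search backend from burst traffic without degrading
the experience for normal users.

## Decisions and trade-offs
- Token bucket per API key: 10 req/s sustained, burst of 30.
- Enforced at the gateway, not duplicated in each service.

## Guardrails
- Over-limit requests get a 429 with a Retry-After header,
  never a silent drop.
- No changes to authenticated internal traffic.

## Verification
- Load test: 50 req/s from one key produces 429s but zero 5xxs.
```

The point isn't the format. It's that intent, trade-offs, guardrails, and verification criteria are explicit and reviewable before an agent writes a line of code.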

Agents are amplifiers — of both good practices and bad ones. Good specs and clean codebases produce great agent output. Vague specs and messy codebases produce more mess, faster. Speed without quality is just faster failure. One entry from the post-offsite survey stuck with me: "don't ship slop." The team wasn't asking to slow down. They were asking for the conditions to move fast well.

What we're building from here

Coming out of those two days, a set of high-leverage moves became clear — not because leadership decreed them, but because the team converged on them.

We stood up a dedicated agent productivity team focused on giving agents the same access to tools and infrastructure that humans have. Verification loops, end-to-end testing, build feedback. The ceiling isn't intelligence — it's access.

We're going spec-first. Treating specifications as infrastructure rather than afterthought documentation, and using our own product, Intent, to make spec-driven workflows natural.

Our code review team pivoted to solving our own code review bottleneck first with agents, so we can solve it for our customers.

And we're investing in agent autonomy through context — AGENTS.md and skills — to give agents the tribal knowledge to produce globally coherent code, not just locally correct code.
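For readers unfamiliar with the convention: AGENTS.md is a plain Markdown file checked into the repo that agents read for project context. A hypothetical sketch — the specific rules below are made up for illustration, not Augment's actual file — might look like:

```markdown
# AGENTS.md

## Architecture
- The API layer never talks to the database directly; all access
  goes through the repository package.

## Conventions
- Errors are wrapped and propagated, never silently swallowed.
- Every new endpoint needs an integration test before review.

## Known hazards
- A past schema migration left dual-written tables; treat the
  older copies as read-only and ask before touching them.
```

Encoding tribal knowledge in this form is what gives an agent a shot at producing globally coherent changes instead of merely locally correct ones.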

The honest caveat: we don't have all the answers. We don't know what the right measures are yet. When agents are writing most of the code, what do you optimize for? Lines of code? PRs merged? Time to resolution? Something entirely new? We're figuring that out in real time, and we'll share what we find.

What comes next

Over the next few weeks, we'll go deeper on the specific projects coming out of this work. But the starting point isn't the projects. The starting point is what this post is about: creating the space to align on what AI-native actually means, surfacing the real emotions, and harnessing the intelligence of your whole team to figure out how to move forward.

Are you navigating this same shift? What's working? What's breaking? What questions keep you up at night?

Learning together feels like the only sensible approach in a moment like this.

Written by

Vinay Perneti

VP of Engineering

As VP of Engineering, Vinay supports product, research, and engineering teams building AI agents that truly understand large, complex codebases. Before joining Augment, Vinay led product and platform organizations at Meta and Pure Storage. He’s drawn to problems that live at the intersection of technology and people, like how teams evolve, how AI reshapes the craft of software engineering, and what it takes to build things that evoke delight.
