
We're in an exponential

Feb 12, 2026
Vinay Perneti

One of my favorite questions to ask in 1-1s is how folks are using agents in their day-to-day work. Over the past month, I noticed a shift bigger than anything I'd seen before.

A significant majority of senior engineers at Augment have crossed a threshold from agents assisting their work to agents writing the majority of their code. Their role has evolved to providing context, setting guardrails, and ensuring the right outcomes. Opus 4.6 and GPT 5.2, combined with Auggie's Context Engine, are enabling tasks that simply weren't possible two months ago.

At the same time, I started noticing similar signals from the broader industry.

I had to step back and make sense of what I was seeing. These aren't vibe-coded side projects. This is real-world software engineering.

It very much feels like we're in an exponential, and the tricky thing about exponentials is you don't realize you're in one until you hit the inflection point.

The two phases that make exponentials so hard to see

Humans are not great at understanding exponentials. We expect change to be linear. Tomorrow looks a lot like today, just a bit better or a bit faster. Exponentials don't work that way.

There’s a children’s book I’ve been reading with my 6-year-old son called "One Grain of Rice: A Mathematical Folktale." It tells a story that captures this better than most business metaphors ever could.

The story takes place during a famine, where the king tightly controls the amount of rice each family receives.

One day, a young girl notices rice spilling from a hole in one of the storehouses. Grain by grain, she carefully collects the rice and returns it to the storehouse, without asking for anything in return.

When the king finds out, he is pleased.

He summons the girl and says, "You have helped my kingdom. Ask for any reward you wish."

She says, "I would like one grain of rice."

The king laughs. "That is all?"

She continues, "One grain today. Two grains tomorrow. Double it every day, for thirty days."

Amused by what seems like a trivial request, the king agrees immediately.

Phase 1: Nothing seems to be happening

The days pass. One grain. Two. Four. Eight.

Up until about Day 20, everything feels manageable. The total amount of rice is roughly one large sack. The storehouses are still full. Nothing feels out of control. The king feels confident he made a smart deal.

This is the phase where most people decide the risk was overstated.

Phase 2: Everything happens at once

Then the doubling starts to matter.

After Day 20, the quantities accelerate rapidly. One sack becomes many. Sacks turn into cartloads. In just the final stretch, the amount of rice explodes, overwhelming the storehouses.

The king realizes too late that doubling changes everything. What looked harmless wasn't. It was exponential.
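
For the curious, the arithmetic behind the story is easy to check. Here's a quick back-of-envelope sketch; the per-grain weight and sack size are my rough assumptions, not figures from the book.

```python
# Back-of-envelope arithmetic for the rice story.
# Assumed figures (mine, not the book's): ~25 mg per grain, ~25 kg per sack.
GRAMS_PER_GRAIN = 0.025
KG_PER_SACK = 25

for day in (10, 20, 30):
    grains = 2**day - 1                       # cumulative grains received after `day` days
    kg = grains * GRAMS_PER_GRAIN / 1000
    print(f"Day {day}: {grains:,} grains ≈ {kg:,.1f} kg ≈ {kg / KG_PER_SACK:,.0f} sacks")
```

On those assumptions, day 20 lands at roughly one sack; ten days later the same doubling produces more than a thousand.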

Exponentials don't announce themselves. They whisper at first, then overwhelm. The whole world is going to wake up to this sooner rather than later.
[Figure] Exponentials have two phases: nothing appears to be happening, then everything happens all at once.

The inflection point is here

At this point, many of you have probably seen the METR paper tracking agent progress over the past five years. The data shows that agents can now complete tasks that take humans four to five hours with a 50% success rate. And all signs point to agents getting 10x better than what we have today.

Sit with that for a moment: the agents we have now will be the worst agents we'll use for the rest of 2026.

Then came the realization. Two things are true:

  1. Models have been improving at an exponential rate, and they will continue to do so for the rest of 2026.
  2. Agents built on top of those models inherit their gains, so agents will continue getting better at an exponential rate.

That doesn't mean models or the agents built on top of them are perfect today, but it does mean that improvement is compounding, and compounding changes the calculus.

[Figure] Measurement of the task-completion time horizons for public language models, courtesy of metr.org.
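
To make "compounding changes the calculus" a bit more concrete, here is a toy projection. The starting horizon and the doubling time below are illustrative assumptions loosely anchored to the figures above, not measurements; swap in your own numbers and the shape of the curve stays the same.

```python
# Toy projection of the 50%-success task horizon under a fixed doubling time.
# Both numbers below are illustrative assumptions, not measured values;
# the point is the shape of the curve, not the exact figures.
start_hours = 4.5        # roughly the 4-5 hour horizon cited above
doubling_months = 7      # assumed doubling time for the horizon

for months in (0, 7, 14, 21, 28):
    horizon = start_hours * 2 ** (months / doubling_months)
    print(f"+{months:2d} months: ~{horizon:5.1f} hours (~{horizon / 8:.1f} workdays)")
```

On those assumptions, the horizon passes a full work week in under two years.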

You might be thinking we've seen exponentials before: mobile, the cloud, the internet.

Those were adoption curves. The technology itself was relatively fixed, and the exponential came from how quickly people adopted it. Your iPhone didn't become fundamentally more capable as more people used it. The internet didn't unlock new primitives every quarter.

AI is different. This time, the tool itself is improving. Models are learning to do things they couldn't do months ago, and agents built on top of them inherit those gains immediately. We're not just riding an adoption curve. We're riding a capability curve. And there's no obvious ceiling in sight.

What do you do in an exponential?

So what does this all mean? No one has it all figured out. Most of us have never lived through a true exponential before. There isn't a playbook you can follow step by step.

When you are in an exponential, though, positioning matters more than precision, and the usual instincts don't work particularly well. Waiting for perfection, demanding certainty before acting, treating today's limitations as fixed constraints: these habits become liabilities in an exponential.

What sounded extreme not so long ago, that agents would be writing the majority of code, feels almost like a default assumption today. So the question becomes, "What and how do we build to be ready for the default assumptions of tomorrow?"

  • What does quality mean when agents are prolific?
  • What context do they need to produce good outcomes?
  • Where do humans add the most leverage?
  • What new failure modes should we expect?

We're addressing these questions in real time, alongside our customers. It's not always comfortable, but that's the nature of riding an exponential rather than reacting to it. We're putting agents into parts of the SDLC where they're not perfect, because that's how we learn what needs to improve. And we'll be sharing what we're seeing, what's working, and what's breaking as we go.

Learning together feels like the only sensible approach in a moment like this. So, I'm curious: Are you feeling this exponential in your own work? Have agents crossed a threshold for you yet? What questions are keeping you up at night? Reality is shifting. Let's talk about how we meet it together.

Written by

Vinay Perneti

VP of Engineering

As VP of Engineering, Vinay supports product, research, and engineering teams building AI agents that truly understand large, complex codebases. Before joining Augment, Vinay led product and platform organizations at Meta and Pure Storage. He's drawn to problems that live at the intersection of technology and people, like how teams evolve, how AI reshapes the craft of software engineering, and what it takes to build things that truly delight developers.
