
Cultivating a culture of AI experimentation: A practical guide for engineering leaders

Jan 30, 2026
Vinay Perneti

Many engineering leaders tell us the same thing: they know AI can meaningfully augment their teams, but the path forward feels unclear. There’s pressure to “get it right” that often leads to overengineering: starting with the hardest problems, building flashy prototypes with no real owner, or never making the leap from personal tinkering to team-wide impact.

The good news is that we’re seeing enough early success stories to know what works. A repeatable pattern is emerging among the companies making the fastest progress: they start with real engineering pain, keep their experiments small, and let genuine value point them toward the right next step.

I recently spoke with engineers from three organizations who are embracing this approach and seeing meaningful results. Their stories offer a practical roadmap for anyone looking to build a culture of experimentation around AI. Here’s what we learned and how you can apply it.

Learn from Redis: Use AI as a shortcut to lost context

There’s plenty of hype about AI “replacing” engineers, but early success is emerging in a much more practical area: reducing context switching and accelerating work that normally pulls engineers out of flow.

Like many companies we work with, Redis has engineers who span time zones, codebases that date back years, and knowledge-base churn that makes it hard for any one person to be fully up to speed on the complete codebase. When Josh Rotenberg, Principal Field Engineer at Redis, ran into a customer request involving a Java library that hadn’t been touched in a while, he decided to leverage Augment’s deep codebase understanding to see if it could help him find the context he needed to make a fix in days, not weeks.

“I found a ton of value in being able to be an archaeologist and go find things that are not in a good state and, within a couple of hours, maybe a couple of days, bring them back to life and make them useful again,” Josh told me.

By using Augment to explain modules, outline what a new feature would require, and even suggest test coverage, Josh reclaimed time normally lost to spelunking through old files.

Key learning: Start with the work that makes your job hardest.

Context-aware AI helps decrease onboarding time for new engineers and helps even seasoned engineers keep track of all of the architecture decisions, codebase idiosyncrasies, and tradeoffs that go into complex legacy codebases. Tasks that normally require deep, tedious digging, like reviving old code, learning unfamiliar tech, and reconstructing missing context, are ideal places to begin.

Learn from Workday: Let exploration surface valuable business insights

Every developer has important-but-not-urgent tasks that pile up as other projects take priority, and that's where Workday's Anand Kumar Sankaran, Senior Principal Engineer, decided to start his AI journey.

Workday has a complex data model described in a 650MB JSON file. In the pre-AI world, a developer would have to work with a variety of parsing tools to explore the schema and ask questions, making for a slow and tedious process. Anand started by wondering how he could iterate faster on understanding this massive model.

He loaded the data model into Augment's context engine and began asking questions. When performance lagged, Anand asked Augment how it could be improved. The agent didn't just identify the issue; it proactively broke the massive file down into multiple smaller files. Before he knew it, Anand was conversing with Augment agents that understood the Workday schema like an expert analyst. The productivity gains were dramatic: tasks that used to take days now took minutes.
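That file-splitting step is easy to picture in code. The sketch below is a hypothetical illustration, not the actual approach used at Workday: the function name and output layout are invented here, and a real 650MB file would call for a streaming parser (such as the third-party ijson library) rather than loading everything with `json.load`.

```python
import json
from pathlib import Path


def split_json_by_top_level_key(src: Path, out_dir: Path) -> list[Path]:
    """Split one large JSON object into one file per top-level key.

    Loading the whole document at once is fine for this illustration,
    but a very large model would need a streaming parser instead.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    with src.open() as f:
        model = json.load(f)

    written = []
    for key, value in model.items():
        # Each top-level section of the schema becomes its own file,
        # small enough to read, diff, and query independently.
        dest = out_dir / f"{key}.json"
        dest.write_text(json.dumps(value, indent=2))
        written.append(dest)
    return written
```

Splitting on top-level keys keeps each output file self-describing, so a person (or an agent) can explore one section of the schema without paying the cost of parsing the whole model.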

With this new collaborative approach, Anand began asking more exploratory, open-ended questions. What started as schema exploration evolved into something completely unexpected: a succession planning proof of concept that eventually became a business analyst agent, a direction he never could have planned from the start.

Key learning: Let use cases emerge from hands-on exploration.

Anand didn't choose a mission-critical project or set out to build an agent. Instead, he picked a complex but non-urgent task and focused on learning what AI could actually do through iterative experimentation. This exploratory approach revealed valuable use cases organically and created a success story that helped grow AI adoption across his team without forcing it on anyone.

Learn from Garner Health: Use AI to solve a real pain point first

Forrest Thomas, Distributed Systems Architect at Garner Health, is seeing AI tool use grow as more developers experience it firsthand. The catalyst was a tool his team built to debug CI pipeline failures, which had been a source of frustration and cognitive overhead for developers and a support burden for the developer experience team.

By providing additional context for CI job failures, the pipeline analyzer, built on Augment’s context-aware CLI, has reduced the number of support tickets coming into the developer experience team. Now it’s also helping developers uncover other opportunities where the tool could be useful, while simultaneously training the AI.

“I had a conversation with a developer saying, ‘How can I improve the instruction files for this, because it led me down the wrong path on this test failure,’” Forrest explained. “It's like, ‘Here's the repo with all the instruction files. Add your specific failure. Add the steps that you had to follow.’ And now that’s embedded within the Augment context engine the next time that failure shows up. So it builds on itself over time.”

Key learning: Demonstrate real, valuable use cases and adoption will follow.

Rather than issuing blanket use mandates, forward-thinking organizations like Redis are relying on champions who build out use cases that help other engineers see the promise of AI. By showing developers how the tools can unburden them from the confusing or cumbersome parts of their job, enthusiasm and a culture of experimentation around AI grow.

Make it your own

So how do you encourage this culture of experimentation around AI? Across these stories, a clear pattern emerges. Adoption should center around helping engineers do what they do best: exploring, debugging, experimenting, and refining. Here’s how to create the conditions for success.

  • Identify a team of early champions and make tools available. Champions should be enthusiastic about the potential for AI tools in their workflow and be willing to share early successes and answer questions with the rest of the team.
  • Encourage personal experimentation around the question, “Where can AI make your job easier?” What do you find yourself putting off until Friday afternoon? What tasks make you groan? Start there.
  • Let the best use cases bubble to the top. Test what you’re building and share it with others. You’ll know when you hit on something valuable.
  • Test, refine, and roll it out. Once you find use cases that work for your team, put them into action and celebrate the wins. At the same time, think about how you can use AI to help build more AI use cases, for example through automated code reviews.
  • Make it fun. Don’t force it; the enthusiasm should come naturally. This isn’t about AI replacing jobs; it’s about engineers using their creativity and curiosity to find more efficient ways to do their work, freeing them to focus on the things they love most.

Chart your path forward

What do these stories have in common?

In each case, the progression was: start small, get value fast, and let usage grow organically. But it all began with choosing the right technical partner.

Generic tools simply don’t go deep enough to provide meaningful results. When your AI truly understands your development context, champions become genuine advocates because the tool actually makes their hardest work easier. That authentic enthusiasm is what drives organization-wide adoption.

Ready to chart your path forward? Download our AI adoption playbook, a comprehensive framework that helps you move from experimentation to integration across your entire software development life cycle and includes even more stories from the field.

Written by

Vinay Perneti

Vinay Perneti is an Engineering Director at Augment Code, where he supports product and research teams working on code generation and AI agents. He is passionate about doing the hard work of delivering delight to users. Previously, he was head of Observability Platforms at Meta, and before that he led the Protocols and Filesystems orgs at Pure Storage. He earned a master’s degree in Filesystems and Networking at Carnegie Mellon University.
