July 10, 2025
Rolling Out AI Coding Assistants: How Drata Did It

AI has raised the bar
With AI accelerating across every stack at Drata, CTO Daniel Marashlian saw the stakes clearly: every engineering org now has to move faster without eroding quality or trust.
But AI wasn’t seen as a replacement for engineers. Instead, Daniel framed it as a new programming model: LLMs are the runtime; prompting is the syntax.
The real prize? Enabling 200+ engineers to ship faster, without compromising compliance or quality. That meant evaluating vendors on concrete use-case wins, not hype.
What follows is the playbook Drata used to hit that productivity-plus-security target—and how your engineering org can adapt it.
Why it matters
- Executive urgency is real and measurable.
- Engineering leaders who wait will inherit unmanaged shadow-AI usage.
- A structured rollout beats a scattershot “just go get a tool” approach every time.
Snapshot: Inside Drata’s Engineering Org
Drata is a fast-growing AI-native Trust Management platform with 200+ engineers across three regions.
Key context:
- Security DNA. As a cybersecurity company, every new tool must clear stringent ISO 42001-mapped controls.
- Exploding surface area. Thirty frameworks, six product lines, and hundreds of customer requirements drive complexity.
- Talent mix. From AI skeptics to AI engineers, skills and interest vary widely across the team.
Call-out: Legacy is a privilege. Maintaining a code base that serves 8,000+ customers while shipping new features meant Drata needed tooling that adds leverage, not just novelty.
Five Filters Drata Applied Before Buying
| Criterion | What Drata Looked For | Field Note |
| --- | --- | --- |
| Accuracy & context-depth | Does the model nail edge-case paths? | “The main theme … was the accuracy of code results.” |
| IDE fit & workflow friction | Runs where engineers already work. | Some engineers rejected solutions that forced a new IDE. |
| Security/compliance posture | ISO 42001 alignment, data boundaries, AI architecture transparency. | Deep questionnaires probed multi-tenant vs. isolated models. |
| Partner support & onboarding rigor | Slack-native customer success and co-development mindset. | “Customer success culture” was a decisive tie-breaker. |
| Agent-level capabilities | Beyond autocomplete: test generation, planning, refactoring. | Unit-test coverage “improved instantaneously” with agent mode. |
The Seven-Vendor Bake-off
Drata refused to guess its way to a winner. Instead, it orchestrated a 30-day, seven-vendor sprint:
- Define use-cases. Boilerplate generation, complex unit tests, cross-service refactors.
- Select pilot cohort. Two pods (5-10 volunteers) per vendor — “if you’re in, you’re all in.”
- Mandate daily usage. Engineers had to lean on the assistant for every eligible task.
- Pressure-test with questions. Cohorts peppered vendors with hard questions to see how the relationship would fare post-decision.
- Security review in parallel. Each tool ran the full ISO-aligned questionnaire.
- Consolidate findings. Accuracy scores and partner engagement determined the finalist (a simple scoring sketch follows the pro tip below).
Pro tip: Drata ran all seven pilots in parallel to compress calendar time while preserving an apples-to-apples cohort test.
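To make the consolidation step concrete, here is a minimal sketch of how a team might roll cohort ratings into one ranking. The weights, score fields, and vendor names are hypothetical illustrations, not Drata’s actual rubric:

```typescript
// Hypothetical bake-off scorecard: consolidate cohort ratings into one ranking.
// Weights and fields are illustrative; tune them to your own evaluation criteria.

interface VendorScore {
  vendor: string;
  accuracy: number;          // 0-10: did suggestions nail edge-case paths?
  workflowFit: number;       // 0-10: how little friction in the existing IDE?
  securityPosture: number;   // 0-10: questionnaire results, data boundaries
  partnerEngagement: number; // 0-10: responsiveness during the pilot
}

// Accuracy dominates, mirroring "the main theme ... was the accuracy of code results."
const WEIGHTS = { accuracy: 0.4, workflowFit: 0.2, securityPosture: 0.25, partnerEngagement: 0.15 };

function weightedTotal(s: VendorScore): number {
  return (
    s.accuracy * WEIGHTS.accuracy +
    s.workflowFit * WEIGHTS.workflowFit +
    s.securityPosture * WEIGHTS.securityPosture +
    s.partnerEngagement * WEIGHTS.partnerEngagement
  );
}

// Rank all piloted vendors, highest composite score first.
function rankVendors(scores: VendorScore[]): VendorScore[] {
  return [...scores].sort((a, b) => weightedTotal(b) - weightedTotal(a));
}

const results: VendorScore[] = [
  { vendor: "Vendor A", accuracy: 8.5, workflowFit: 9, securityPosture: 8, partnerEngagement: 9 },
  { vendor: "Vendor B", accuracy: 9.0, workflowFit: 6, securityPosture: 7, partnerEngagement: 6 },
];

console.log(rankVendors(results).map((s) => `${s.vendor}: ${weightedTotal(s).toFixed(2)}`));
```

Writing the rubric down before the pilots start is what keeps a seven-vendor comparison apples-to-apples; the exact weights matter less than agreeing on them up front.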
Driving Adoption: Three Change-Management Levers
1. Hard-wired OKRs. AI adoption became an explicit engineering objective, with a concrete success metric: every engineer touches the assistant at least once per week during the first quarter (a sketch of how to track this follows the list).
2. Champion Channel. A public Slack channel called #wins-ai showcases screenshots of successful prompts and performance gains. Roughly half the posts now come from engineers using the coding assistant.
3. Education Push. Engineers self-select Coursera courses, internal workshops, or vendor-led sessions, then log completed learning in a central tracker. The mantra: “AI literacy is non-negotiable.”
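As a rough illustration of lever #1, here is how a team might compute that weekly-touch metric from assistant usage logs. The `UsageEvent` shape and the roster input are assumptions for the sketch; any telemetry export with an engineer ID and a timestamp would do:

```typescript
// Hypothetical adoption check for the OKR: did every engineer touch the
// assistant at least once in a given week? Event shape is illustrative only.

interface UsageEvent {
  engineerId: string;
  timestamp: Date; // when the assistant was invoked
}

function weeklyAdoptionRate(
  roster: string[],      // all engineer IDs in the org
  events: UsageEvent[],  // exported assistant usage telemetry
  weekStart: Date,
): { rate: number; inactive: string[] } {
  const weekEnd = new Date(weekStart.getTime() + 7 * 24 * 60 * 60 * 1000);
  const active = new Set(
    events
      .filter((e) => e.timestamp >= weekStart && e.timestamp < weekEnd)
      .map((e) => e.engineerId),
  );
  const inactive = roster.filter((id) => !active.has(id));
  return { rate: active.size / roster.length, inactive };
}

// A rate below 1.0 means the "every engineer, every week" objective was missed,
// and `inactive` tells champions exactly whom to reach out to.
```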
Early Wins That Moved the Needle
- Unit-test turbo-boost. Complicated mocking suites that once took hours now materialize in minutes, lifting coverage on previously neglected paths (see the sketch after this list).
- Boilerplate acceleration. Agents scaffold services fast enough to “reduce the majority of setup time by a huge factor”.
- Slack brag file. Dozens of #wins-ai posts document 5-10× task speed-ups each week — priceless for momentum.
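To ground the unit-test win, here is the flavor of mocking suite an agent can scaffold in minutes. The `BillingService`, `PaymentGateway`, and Jest setup are hypothetical stand-ins, not Drata’s code:

```typescript
// Hypothetical example of an agent-generated mocking suite (Jest).
// BillingService and PaymentGateway are illustrative, not Drata's services.

interface PaymentGateway {
  charge(customerId: string, cents: number): Promise<{ ok: boolean }>;
}

class BillingService {
  constructor(private gateway: PaymentGateway) {}

  async invoice(customerId: string, cents: number): Promise<string> {
    if (cents <= 0) throw new Error("invalid amount");
    const result = await this.gateway.charge(customerId, cents);
    return result.ok ? "paid" : "failed";
  }
}

describe("BillingService.invoice", () => {
  // jest.fn() stubs the gateway so no real network call is ever made.
  const charge = jest.fn();
  const service = new BillingService({ charge });

  beforeEach(() => charge.mockReset());

  it("returns 'paid' when the gateway accepts the charge", async () => {
    charge.mockResolvedValue({ ok: true });
    await expect(service.invoice("cust_1", 4200)).resolves.toBe("paid");
    expect(charge).toHaveBeenCalledWith("cust_1", 4200);
  });

  it("returns 'failed' when the gateway declines", async () => {
    charge.mockResolvedValue({ ok: false });
    await expect(service.invoice("cust_1", 4200)).resolves.toBe("failed");
  });

  it("rejects non-positive amounts before touching the gateway", async () => {
    await expect(service.invoice("cust_1", 0)).rejects.toThrow("invalid amount");
    expect(charge).not.toHaveBeenCalled();
  });
});
```

The tedious part an assistant removes is not the assertions but the scaffolding: wiring stubs, resetting state between cases, and covering the decline and validation paths that busy teams tend to skip.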
“At some point I realized: this is going to transform our engineering process.” — Tony Bentley, Staff Engineer
What Drata Would Repeat — Checklist for Your Team
- Scope first, shop second. Tie vendor demos to your real use-cases.
- Run a multi-vendor cohort test with mandated daily use.
- Log accuracy, not just adoption. False positives erode trust faster than no suggestions.
- Pull security in on day one; the questionnaire will only grow.
- Demand tight customer success loops. Async Slack > weekly QBRs.
- Elevate AI literacy. Treat prompting like a new programming language.
- Celebrate wins publicly to normalize usage and surface best practices.
Closing POV: AI Literacy Is the New Git Literacy
Coding assistants are no silver bullet, but they raise the baseline for productivity and quality. Drata’s experience shows that world-class teams treat AI like any critical runtime: they evaluate rigorously, adopt intentionally, and teach continuously.
“LLMs are just another language; prompting is the syntax. Master it or fall behind.” — Jono Stiansen, AI Engineer, Drata
Ready to start? Draft your evaluation plan, pick a pilot cohort, and schedule your first vendor gauntlet. Treat AI like learning a new skill or tool, not a magic wand. And don’t forget to celebrate your wins.

Molisha Shah
GTM and Customer Champion