
GitHub repo leaks system prompts for 28+ AI coding tools: what developers should know

May 11, 2026
Molisha Shah

Three things worth knowing

  • A single GitHub repo has collected the full system prompts, internal tools, and model configurations for 28+ AI coding tools. It just crossed 137K stars and 34.2K forks.
  • For the first time, developers can read the actual instructions each tool sends to its model, rather than relying on marketing copy or guesswork.
  • This raises a practical question for developers (what are these tools actually doing?) and a harder one for AI startups (how protected are your own prompts?).

Most AI coding tool comparisons come down to vibes. You try Cursor for a week, Windsurf for a week, and form an opinion. You read some benchmarks. You ask on Reddit.

There's a better way now. A GitHub repository maintained by developer Lucas Valbuena has been quietly collecting the raw system prompts of every major AI coding tool since early 2025. x1xhlol/system-prompts-and-models-of-ai-tools just crossed 137K stars and 34.2K forks. That's not curiosity traffic. Developers are reading these prompts.

The x1xhlol/system-prompts-and-models-of-ai-tools GitHub repository showing 137K stars, 34.2K forks, and a directory listing of 28+ AI tools including Cursor, Windsurf, Augment Code, and Devin AI.

What Happened

The repository was last updated on May 10, 2026, and now spans 496 commits across the following directories: Cursor, Windsurf, Claude Code, Augment Code, Devin AI, Kiro, Lovable, Manus, Replit, v0, VSCode Agent, Xcode, and more.

Each directory contains raw system prompt text files and, in many cases, JSON files describing internal tool definitions. The Augment Code directory includes a gpt-5-tools.json file added in August 2025. Cursor's folder contains an "Agent Prompt 2.0" from November 2025. The project has 28 contributors, 1.6K watchers, and a Discord server (LeaksLab) supporting ongoing collection.

What I'd flag here is that the commit history shows prompts being updated regularly as vendors change their instructions. This isn't a static snapshot. It's a living record of how these tools evolve.

Key Features

  • 28+ tools covered: Augment Code, Claude Code, Cursor, Windsurf, Devin AI, Kiro, Junie, Replit, Lovable, Manus, v0, VSCode Agent, Xcode, Warp.dev, Trae, and others all have dedicated directories with extracted prompts.
  • Tool definitions included: Several entries ship with JSON files mapping internal tool schemas, function calls, and model routing logic. Augment Code's gpt-5-tools.json and Windsurf's multi-wave tool configs are the most revealing examples.
  • Version history preserved: With 496 commits, you can track how prompts have changed over time. Cursor's agent prompt has at least two documented versions, which tells you something about where they were struggling.
  • Security notice baked in: The repo explicitly warns AI startups about prompt injection and extraction risks and links to a mitigation service called ZeroLeaks. That warning is in the README of the repo that extracted the prompts. Worth reading twice.
  • Active community: 28 contributors and a Discord server suggest this collection effort is ongoing, not abandoned.

Why It Matters

Reading a tool's system prompt is more informative than reading its marketing page.

The prompt tells you what the model has been explicitly instructed to do: how it handles multi-file edits, whether it asks for confirmation before destructive operations, how it structures agent loops, and what it's explicitly told to avoid. That's the real product.

For developers evaluating AI coding tools, these prompts reveal differences that don't show up in feature comparison tables. For tool builders, the repo is a practical warning: if your differentiation lives entirely in a system prompt, competitors and users can read it. The 137K stars tell you the developer community treats this as a valuable signal, regardless of whether vendors intended to share it.

Example Use Case

A team migrating a TypeScript monorepo from Windsurf to Cursor can compare the agent prompts for both tools side by side before committing to the switch.

By reading Cursor's Agent Prompt 2.0 and Windsurf's latest system prompt, they can see exactly how each tool handles multi-file edits, project context limits, and test generation. If Cursor's prompt includes explicit instructions for preserving import paths during refactors and Windsurf's doesn't, that difference directly informs the migration decision. They can also pull patterns from these prompts into their own Cursor rules file.

This is the research I'd do before any serious tool evaluation. It takes 30 minutes and tells you more than a week of trial accounts.
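The side-by-side comparison above is easy to script. Here is a minimal sketch using Python's standard `difflib`; the two prompt strings are illustrative stand-ins, not the real vendor prompts, and the file labels are hypothetical:

```python
import difflib

def prompt_diff(a: str, b: str) -> list[str]:
    """Return a unified diff between two system prompt texts."""
    return list(difflib.unified_diff(
        a.splitlines(), b.splitlines(),
        fromfile="windsurf_prompt", tofile="cursor_prompt", lineterm="",
    ))

# Illustrative snippets only -- not the actual vendor prompts.
windsurf = "You are an AI coding agent.\nAsk before destructive edits."
cursor = "You are an AI coding agent.\nPreserve import paths during refactors."

for line in prompt_diff(windsurf, cursor):
    print(line)
```

Run against the real prompt files from the repo, a diff like this surfaces exactly the kind of instruction-level differences (confirmation behavior, refactor rules) that never appear in a feature table.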

Competitive Context

Cursor and Windsurf both have detailed prompt histories in the repo. Cursor's directory includes its second-generation agent prompt from November 2025. Windsurf's collection runs through Wave 11 tool configurations, showing rapid iteration through mid-2025. Two versions from the same tool, separated by a few months, tell you more about where a product is evolving than any changelog.


Augment Code's entry includes a gpt-5-tools.json config from August 2025. Devin AI's directory contains a DeepWiki prompt alongside its main system prompt. Junie has a single system prompt from May 2025 compared to Cursor's multi-file directory. The Anthropic directory tracks prompts up through Claude Sonnet 4.6.

The JSON tool definition files are the part I find most interesting. Knowing which tools a model has been given access to tells you more about a product's actual capabilities than the feature page does.
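Reading a tool definition file takes a few lines of code. The JSON excerpt below is a hypothetical example shaped like a common function-calling schema; the real files in the repo (such as Augment Code's gpt-5-tools.json) follow vendor-specific formats:

```python
import json

# Hypothetical excerpt -- not taken from any real vendor file.
tools_json = """
[
  {"name": "read_file", "description": "Read a file from the workspace",
   "parameters": {"path": {"type": "string"}}},
  {"name": "run_tests", "description": "Execute the project test suite",
   "parameters": {"target": {"type": "string"}}}
]
"""

def capability_summary(raw: str) -> dict[str, str]:
    """Map each tool name to its description: a quick capability inventory."""
    return {t["name"]: t["description"] for t in json.loads(raw)}

for name, desc in capability_summary(tools_json).items():
    print(f"{name}: {desc}")
```

Even this crude inventory answers the question the feature page dodges: what can the agent actually do on its own?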

My Take

This repository is required reading for anyone building, evaluating, or competing with AI developer tools. The prompts are public, searchable, and versioned. Benchmarks tell you how a tool scores. These prompts tell you why.

For AI startups: your system prompts are probably not as private as you think. Whether or not that changes how you design your product, it's worth knowing before a competitor reads yours.

Understanding how AI tools are built is step one. Building a system where agents work reliably across your entire engineering org is step two.


Written by

Molisha Shah

GTM

Molisha is an early GTM and Customer Champion at Augment Code, where she focuses on helping developers understand and adopt modern AI coding practices. She writes about clean code principles, agentic development environments, and how teams are restructuring their workflows around AI agents. She holds a degree in Business and Cognitive Science from UC Berkeley.

