
Cursor, Windsurf, Claude Code system prompts collected in 131k-star GitHub repo

Mar 16, 2026
Molisha Shah

TL;DR

  • A GitHub repo with 131K stars and 33.3K forks has published the full system prompts and tool configs for 30+ AI coding tools.
  • It covers Cursor, Windsurf, Claude Code, Augment Code, Devin AI, and others, with over 30,000 lines of prompt content.
  • For developers choosing between tools or building their own agents, it is the largest public teardown of how commercial AI coding assistants actually work.

A GitHub repository that collects the full system prompts, internal tools, and model configurations for over 30 AI coding tools has reached 131K stars and 33.3K forks. The repo, maintained by user x1xhlol, exposes the inner workings of tools like Cursor, Windsurf, Claude Code, Augment Code, and Devin AI. For developers choosing between AI coding assistants, this amounts to a public teardown.

What Happened

The system-prompts-and-models-of-ai-tools repository has been accumulating system prompts since early 2025, with 486 commits across 28 contributors. It contains over 30,000 lines of prompt content covering tools including Cursor, Windsurf, Claude Code, Augment Code, Devin AI, Replit, Lovable, Manus, v0, Kiro, Junie, VSCode Agent, Xcode, Warp.dev, and others.

The collection spans full system prompts, tool definitions (JSON schemas for function calling), and, in some cases, model version details. The most recent update landed on March 9, 2026; recent additions include Anthropic’s Claude Sonnet 4.6 prompt and updated v0 prompts. The repo is licensed under GPL-3.0.

Key Features

  • Full prompt text for 30+ tools: Raw system prompts for Cursor, Windsurf, Devin AI, Augment Code, Claude Code, and 25+ others.
  • Tool and function schemas included: Several entries include JSON tool definitions, the function-calling schemas these agents use.
  • Version-specific snapshots: The repo tracks prompt changes over time, letting you diff how prompts evolve across versions.
  • Broad coverage beyond code editors: Includes prompts from Perplexity, Notion AI, Manus, Lovable, Same.dev, Dia, and others outside the code editor category.
  • Community-maintained: 28 contributors submit extracted prompts, updated as tools ship new versions.
  • Searchable by tool: Each tool gets its own directory, making it straightforward to compare prompt structures across competing products.
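
To make the tool-definition entries concrete: function-calling schemas in this style are typically JSON objects with a name, a description, and a JSON Schema block describing parameters. The `read_file` tool and its fields below are a hypothetical sketch in that common format, not copied from any specific entry in the repo.

```python
import json

# Hypothetical tool definition in the common function-calling style
# (name + description + JSON Schema "parameters"); illustrative only.
tool_definition = """
{
  "name": "read_file",
  "description": "Read the contents of a file in the workspace.",
  "parameters": {
    "type": "object",
    "properties": {
      "path": {"type": "string", "description": "Workspace-relative file path"},
      "start_line": {"type": "integer", "description": "First line to read"},
      "end_line": {"type": "integer", "description": "Last line to read"}
    },
    "required": ["path"]
  }
}
"""

tool = json.loads(tool_definition)
params = tool["parameters"]["properties"]

print(tool["name"])                    # the function the agent can call
print(sorted(params))                  # parameter names exposed to the model
print(tool["parameters"]["required"])  # which parameters are mandatory
```

Reading a handful of these definitions side by side is often enough to see which filesystem, search, and shell operations each agent actually exposes to its model.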
[Image: The x1xhlol/system-prompts-and-models-of-ai-tools GitHub repository showing 131K stars, 33.4K forks, 486 commits, and 28 contributors across 30+ AI coding tool directories.]

Why It Matters

  • Prompts reveal more than demos: System prompts define what an AI coding tool prioritizes, what guardrails it follows, how tools get invoked, and what assumptions the agent makes about your codebase. Reading them tells you more than any marketing page.

  • Apples-to-apples evaluation: You can compare how Cursor's agent prompt structures coding sessions versus how Windsurf defines its tool interactions, or check what Devin's prompt actually includes. These are engineering decisions baked into text.
  • A real security consideration: The repo's README warns AI startups directly that exposed prompts can become targets. If your tool's entire behavior specification is public, competitors can study and replicate your prompt engineering.

Example Use Case

Your team is building on a TypeScript monorepo and debating between Cursor and Windsurf for agentic code edits. Before committing to a subscription or trialing both tools on your actual codebase, you pull up both prompts from the repo.

You compare how each tool structures its instructions for code edits, what tools each agent can call, and what constraints each sets on the model. The JSON tool schemas show you the filesystem and search operations each agent exposes. The differences become concrete in an afternoon, without writing a single line of test code.
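
One way to make that comparison mechanical is to extract each agent's tool names from its schemas and diff them as sets. The tool name lists below are hypothetical stand-ins for what you would pull from each tool's directory in the repo, not the actual Cursor or Windsurf inventories.

```python
# Hypothetical tool inventories for two agents ("A" and "B"), standing in
# for the real JSON tool schemas in each tool's repo directory.
agent_a_tools = {"read_file", "edit_file", "codebase_search", "grep_search", "run_terminal_cmd"}
agent_b_tools = {"read_file", "edit_file", "grep_search", "list_dir", "run_command"}

shared = sorted(agent_a_tools & agent_b_tools)   # capabilities both agents expose
only_a = sorted(agent_a_tools - agent_b_tools)   # unique to agent A
only_b = sorted(agent_b_tools - agent_a_tools)   # unique to agent B

print("shared:", shared)
print("only in agent A:", only_a)
print("only in agent B:", only_b)
```

The interesting differences usually show up in the "only in" buckets: a semantic codebase-search tool versus plain grep, or distinct terminal-command conventions, which is exactly the kind of design decision the repo makes visible.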

Competitive Context

The repo covers the major players side by side. Cursor has its Agent Prompt 2.0, showing how it structures agentic coding sessions. Windsurf includes tool definitions through Wave 11. Claude Code and Augment Code both appear with their prompt text, with Augment's entry including a GPT-5 tools JSON config. Devin AI includes both its core agent prompt and the DeepWiki prompt that powers its documentation features.

What stands out is the variation. Prompt lengths, tool schemas, and instruction patterns differ significantly across tools, giving you a real basis for comparison without subscribing to all of them.

Bottom Line

This repository gives developers a direct look at how the biggest AI coding tools actually work. If you are picking between Cursor, Windsurf, Claude Code, Augment, Devin, or any of the other tools listed, reading the raw prompts beats any product demo for technical comparison. Diff the prompts and make tooling decisions based on what the models are actually told to do.

You've seen what's under the hood. Try the tool built around context, not just prompts.

Try Augment Code

Free tier available · VS Code extension · Takes 2 minutes

Written by

Molisha Shah


GTM and Customer Champion

