
GSD hits 58.9K stars as spec-driven dev system for Claude Code

Apr 30, 2026
Molisha Shah

Three things worth knowing

  • Get Shit Done (GSD) is an open-source meta-prompting system that solves context rot in Claude Code and 13 other runtimes, now at 58.9K GitHub stars.
  • It breaks work into atomic plans executed in fresh context windows, keeping your main session at 30-40% usage while subagents do the heavy lifting.
  • If you've hit quality walls on longer AI coding sessions, this is the most structured fix I've seen at the community level.

Context rot is a problem most developers hit, and few talk about openly. As Claude fills its context window during long coding sessions, output quality drops: responses get shorter, instructions get missed, and code gets inconsistent. Get Shit Done (GSD) is the most structured open-source attempt I've seen at solving it systematically.

Created by developer TÂCHES, GSD adds structured context engineering and spec-driven workflows on top of Claude Code and now supports 14 runtimes. It just crossed 58.9K GitHub stars, which tells me this problem is more widespread than most teams admit, and that developers are done waiting for model vendors to fix it.

[Screenshot: the gsd-build/get-shit-done GitHub repository showing 58.9K stars, 5K forks, and a directory listing including agents, hooks, and sdk folders.]

What Happened

GSD launched in December 2025 and has accumulated over 2,100 commits, 138 contributors, and 57 releases in roughly four months. The latest stable release is v1.38.5 (April 25, 2026), with v1.39.0-rc.4 in pre-release.

The system installs via npx get-shit-done-cc@latest and works across Claude Code, Gemini CLI, Codex, Copilot, Cursor, Windsurf, Augment, and seven other runtimes. The installer auto-detects and configures the right file layout for each runtime, which is the kind of detail that makes the difference between a tool you evaluate and one you actually adopt.
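For reference, the install looks like this. The base command and the `--minimal` flag are the ones documented for GSD; I haven't verified any other flags, so treat everything beyond these two invocations as unknown:

```sh
# Standard install (auto-detects runtime and configures the file layout)
npx get-shit-done-cc@latest

# Minimal install: ~700-token cold start instead of ~12K
npx get-shit-done-cc@latest --minimal
```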

What I'd flag here is the pace. Going from a December 2025 launch to 58.9K stars by April 2026, with commits still landing daily, suggests sustained adoption rather than a one-off viral spike.

Key Features

  • Spec-driven workflow loop. A structured discuss, plan, execute, verify, and ship cycle that breaks projects into phases, each with its own research, planning, and execution steps.
  • Fresh context per plan. Execution spawns parallel subagents, each with a clean 200K-token context window. The main session stays at 30-40% utilization throughout.
  • Wave-based parallel execution. Plans are grouped by dependency into waves. Independent plans run simultaneously; dependent plans wait. Each task gets its own atomic git commit.
  • Minimal install mode. A --minimal flag cuts cold-start token overhead from ~12K to ~700 tokens, a 94% reduction. Worth knowing if your team uses local LLMs or token-metered APIs.
  • Multi-agent orchestration. Specialized agents handle research, planning, execution, and verification. The orchestrator spawns them, collects results, and routes to the next step without your main session taking on the load.
  • Built-in security hardening. Path traversal prevention, prompt injection detection, and a CI-ready injection scanner that catches vectors in agent and workflow files.
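The wave grouping above is essentially topological layering over the plan dependency graph: peel off every plan whose dependencies are already satisfied, run that set in parallel, repeat. A minimal sketch of the idea (the function and data structures are my own illustration, not GSD's actual implementation):

```python
def group_into_waves(plans, deps):
    """Group plans into waves. Each wave contains only plans whose
    dependencies were all completed in earlier waves, so every plan
    within a wave can execute in parallel."""
    remaining = set(plans)
    done = set()
    waves = []
    while remaining:
        # Plans runnable now: every dependency already completed.
        wave = {p for p in remaining if set(deps.get(p, [])) <= done}
        if not wave:
            raise ValueError("circular dependency among plans")
        waves.append(sorted(wave))
        done |= wave
        remaining -= wave
    return waves

# Two independent plans land in the same wave; a third that depends
# on both waits for the next wave.
waves = group_into_waves(
    ["user-model", "auth-api", "checkout"],
    {"checkout": ["user-model", "auth-api"]},
)
```

The same layering is available in the standard library via `graphlib.TopologicalSorter`; the explicit loop here just makes the wave boundaries visible.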

Why It Matters

AI coding tools produce inconsistent output at scale. The longer a session runs, the worse the results get. GSD treats the context window as a managed resource: break work into atomic plans, run each in a fresh subagent context, and stitch results together with git commits.
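The "managed resource" framing can be made concrete with a toy model. This is my own simplification, not GSD's code: the heavy token spend happens inside throwaway subagent windows, and only a small result summary flows back to the main session, so main-session utilization barely moves no matter how large each plan is. The 200K limit matches the window size described above; the 500-token summary size is an assumption.

```python
CONTEXT_LIMIT = 200_000  # tokens per window, per the description above

class Session:
    """Tracks token usage within a single context window."""
    def __init__(self):
        self.used = 0

    @property
    def utilization(self):
        return self.used / CONTEXT_LIMIT

def execute_plan(main, plan_tokens, summary_tokens=500):
    """Run a plan in a fresh subagent context; only the summary
    of the result flows back into the main session."""
    subagent = Session()          # clean window per plan
    subagent.used += plan_tokens  # heavy lifting happens here
    main.used += summary_tokens   # main session absorbs only the summary
    return subagent.utilization

main = Session()
main.used = 60_000  # orchestration overhead, ~30% of the window
for plan_cost in (80_000, 120_000, 90_000):
    execute_plan(main, plan_cost)
```

After three heavy plans the main session has grown by only 1,500 tokens, staying near the 30-40% band the project claims.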

For teams, the practical payoff is concrete. A solo developer or small group can run multi-phase projects with structured planning, parallel execution, and automated verification, all without manual context management. The system handles research, creates XML-structured task plans, executes them in fresh agent contexts, and verifies results against stated goals.

The 14-runtime support is also worth noting. Developers on Cursor or Windsurf can use the same workflow system as those on Claude Code. One install, consistent behavior across tools.

Example Use Case

A developer building a Next.js e-commerce app runs /gsd-new-project to initialize. The system asks questions, spawns research agents, and produces a phased roadmap.

For Phase 1 (user authentication), they run /gsd-discuss-phase 1 to specify email and password login with JWT via the jose library. Then /gsd-plan-phase 1 generates two plans: one for the user model, one for the auth API. Because they're independent, GSD groups them into the same wave and runs them in parallel with /gsd-execute-phase 1. Each plan runs in a fresh context. Each task gets its own commit. After execution, /gsd-verify-work 1 walks through manual acceptance testing. If the login fails, the system spawns debug agents and automatically generates fix plans.
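Laid out end to end, the Phase 1 session is five commands (as given above; the inline comments are my summary of each step):

```text
/gsd-new-project        # initialize: questions, research agents, phased roadmap
/gsd-discuss-phase 1    # specify: email/password login, JWT via jose
/gsd-plan-phase 1       # generate two independent plans: user model, auth API
/gsd-execute-phase 1    # both plans run in parallel in one wave, fresh contexts
/gsd-verify-work 1      # manual acceptance testing; debug agents on failure
```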

This is the workflow I'd walk a team through when asking how to make AI-assisted development reliable at scale. The answer here is concrete enough to evaluate in an afternoon.

Competitive Context

GSD positions itself directly against other spec-driven tools. The README calls out BMAD and Speckit specifically, arguing that they add unnecessary enterprise ceremony: sprint ceremonies, story points, and stakeholder syncs. GSD takes the opposite position. Complexity lives in the system, not the workflow.


Claude Code remains the primary target. GSD's skill files install to .claude/skills/ and its slash commands like /gsd-execute-phase are native Claude Code patterns. For teams using other runtimes, the same workflow applies; the installer handles the differences. The --minimal install mode directly targets developers using local models or token-metered APIs, where the default 86-skill surface creates real cost pressure.

What I find worth watching is where GSD sits relative to ECC and other Claude Code configuration layers. GSD focuses on workflow structure and context management. ECC focuses on agent standardization and security. They're solving different problems, and I'm seeing teams use both.

My Take

GSD is a structured workflow layer that keeps AI coding agents reliable across long sessions. If you use Claude Code, Cursor, Windsurf, or any of the 13 other supported runtimes for serious development work and have hit context degradation on larger projects, this is worth evaluating.

Install with npx get-shit-done-cc@latest and run /gsd-help to start. At 58.9K stars, 5K forks, and 138 contributors with daily commits, this has moved well past side project territory.


Written by

Molisha Shah


GTM

Molisha is an early GTM and Customer Champion at Augment Code, where she focuses on helping developers understand and adopt modern AI coding practices. She writes about clean code principles, agentic development environments, and how teams are restructuring their workflows around AI agents. She holds a degree in Business and Cognitive Science from UC Berkeley.

