
Intent Walkthrough: From Prompt to Merged PR

Apr 26, 2026 · Last updated: Apr 27, 2026
Ani Galstian

Intent runs a Coordinator-Implementor-Verifier pipeline in which agents share a living spec and all work passes through three human review checkpoints before any code reaches the repository. Spec review, task-breakdown review, and final diff review keep developers in control of every decision throughout the session. This walkthrough covers the complete flow from install to merged code, including the spec edits that consistently pay off, the model routing that controls cost, and the gaps Intent does not yet cover.

TL;DR

Most AI coding tools run agents with independent prompts and no shared plan. Intent coordinates agents through a living spec: a Coordinator drafts the plan, Implementors execute in parallel git worktrees, and a Verifier checks results before the developer reviews diffs and merges. Three human checkpoints (spec, tasks, diffs) keep developers in control while parallel agents handle execution.

Workflow at a Glance

For readers who want the shape of the session before the details:

  1. Prompt: describe the feature in natural language
  2. Spec: Coordinator drafts; developer reviews and edits (Checkpoint 1)
  3. Tasks: Coordinator decomposes; developer reviews breakdown (Checkpoint 2)
  4. Execute: Implementors work in parallel worktrees
  5. Verify: Verifier checks results against the spec
  6. Review: developer inspects diffs in the built-in editor (Checkpoint 3)
  7. Commit and PR: auto-commit stages changes; Intent creates the pull request with auto-generated description
  8. Merge: developer merges from within Intent

The Missing Intent Getting-Started Guide

Developers searching for an Intent walkthrough hit a specific gap. The official Intent docs explain what the product does without showing how a real session flows from first prompt to merged code. Dedicated quickstart and getting-started sub-pages remain unpublished, and no public guide walks through a real session end to end.

This guide fills that gap. It covers the complete lifecycle of building JWT auth middleware, the same scenario shown on the Intent product page, and includes the decisions that matter: which spec edits pay off at Checkpoint 1, when to route to Haiku vs. Opus, and when to skip Intent for a hotfix. Each step maps to a specific screen, action, or decision point so developers can replicate the workflow on their own repositories.

See how Intent's living specs keep parallel agents aligned from first prompt to merged PR.

Build with Intent

Free tier available · VS Code extension · Takes 2 minutes

Setup: Install Intent and Connect a GitHub Repo

Setup is macOS-only: install the Intent desktop app, optionally add the Auggie CLI for native Augment agents, and connect a GitHub repository for the PR workflow.

Prerequisites

| Requirement | Details |
| --- | --- |
| Operating system | macOS, Apple Silicon (Intel not supported during public beta) |
| Node.js | Version 22 or later (required for Auggie CLI) |
| Shell | zsh, bash, or fish |
| Augment Code account | Required; sign up at augmentcode.com |

Full CLI requirements are in the Auggie command-line documentation.

Install Intent and Auggie

Download the .dmg from augmentcode.com, install it, and launch Intent by Augment. Then install the CLI:

```bash
npm install -g @augmentcode/auggie
auggie login
```

The CLI uses the auggie login flow with locally stored session tokens.

Auggie or BYOA? With a paid Augment Code plan in place, install Auggie to get the native Coordinator-Implementor-Verifier setup, the Context Engine, and the full specialist agent set. With a Claude Code, Codex, or OpenCode subscription already paid for, skip Auggie and start in BYOA mode, then add Auggie later if Context Engine integration becomes a bottleneck.

Connect a GitHub Repository

Intent's PR workflow depends on the GitHub App, installed through the Augment Code dashboard:

  1. Visit app.augmentcode.com/settings/code-review
  2. Click Install GitHub App
  3. Grant access to the repositories Intent will work with

The GitHub App install instructions cover the dashboard flow in detail.

Setup Pitfalls Worth Knowing

A handful of issues account for most setup friction:

  • Installing the GitHub App directly from GitHub breaks the tenant connection. Always install through the Augment Code dashboard.
  • Node.js below v22 causes silent CLI failures. Run node --version before npm install.
  • auggie login hanging usually means a stale browser session or blocked redirect. Open the auth URL manually in a fresh browser profile.
  • Intel Macs are not supported during public beta. Apple Silicon is the only tested target.
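The Node version pitfall above is easy to guard with a preflight check. A minimal sketch; the `node_ok` helper is hypothetical, not part of Auggie:

```shell
# Hypothetical preflight guard for the silent-CLI-failure pitfall:
# verify the Node major version before installing the Auggie CLI.
node_ok() {
  v="${1#v}"           # strip the leading "v" from e.g. "v22.4.1"
  major="${v%%.*}"     # keep everything before the first dot
  [ "$major" -ge 22 ]
}

node_ok "v22.4.1" && echo "ok to install"
node_ok "v18.19.0" || echo "upgrade Node first"
# Real usage: node_ok "$(node --version)" && npm install -g @augmentcode/auggie
```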

Step 1: Describe a Feature in Natural Language

With Intent installed and authenticated, create a new Space, describe the project in the prompt box, and click Create space.

What Makes a Good Intent Prompt

Prompt quality directly shapes the Coordinator's first spec draft, which in turn determines how much editing Checkpoint 1 requires. The living specs guide covers the underlying principles. The short version is four rules:

  • Separate requirements, constraints, and success criteria. Mixing them produces specs with ambiguous acceptance.
  • Quantify success. "Responds within 200ms at p95" beats "load fast."
  • Provide file:line anchors when known control points exist (middleware registration, route tables, config loaders).
  • Include exact build, test, and lint commands. "npm run lint" beats "lint passes."

A good prompt for the JWT example:

```text
Add JWT auth middleware: acme-corp/api-gateway

Requirements:
- RS256 signing for access tokens
- 15m access token / 7d refresh token lifetime
- Redis revocation list for invalidated tokens
- Protect /api/v1/* endpoints, exclude /api/v1/health

Success criteria:
- All existing endpoint tests pass
- New middleware tests cover token validation, expiry, and revocation
- npm run lint exits clean
```

A weaker version of the same prompt: "Add JWT auth to the API. Make sure tokens expire and we can invalidate them. Should be secure." That prompt produces a spec the developer rewrites at Checkpoint 1 because every requirement is ambiguous, no commands are specified, and "secure" has no acceptance criterion.

Space Configuration

| Field | Required | Purpose |
| --- | --- | --- |
| Name | Yes | Identifies the workspace |
| Location | Yes | Local directory path |
| Git repository | No | Links to a GitHub repo for the PR workflow |
| Description | No | Organization and context |
| Tags | No | Organization and filtering |

Creating a Space automatically creates a dedicated git branch and worktree, with no separate user-triggered step required. The isolation pattern is covered in the git worktrees for parallel agents guide.

Step 2: Coordinator Drafts a Living Spec

Once the Space is created, Intent shifts into a tabbed workspace layout that becomes the primary work surface for the rest of the session.

| Tab | Contents |
| --- | --- |
| Files | File tree for the workspace and active worktree |
| Context | Linked references the Coordinator pulls into the session |
| Spec | The living specification; opens when the Coordinator first writes to it |
| Changes | Diff view of in-progress and completed agent work |

The Coordinator analyzes the codebase, drafts the spec, generates tasks, and delegates to specialist agents. The Spec tab populates automatically when the Coordinator first writes to it. The Coordinator-Implementor-Verifier guide breaks down each agent's responsibilities in more detail.

What a Living Spec Looks Like

For the JWT prompt above, the Coordinator's first draft typically lands close to:

```yaml
feature: jwt-auth-middleware
target_files:
  - src/middleware/auth.ts        # new
  - src/middleware/index.ts       # register middleware
  - src/server.ts                 # mount on /api/v1/*
  - src/lib/redis.ts              # extend for revocation list
  - tests/middleware/auth.test.ts # new
config:
  signing_algorithm: RS256
  access_token_ttl: 15m
  refresh_token_ttl: 7d
  protected_paths: [/api/v1/*]
  excluded_paths: [/api/v1/health]
acceptance:
  - existing endpoint tests pass unmodified
  - "new tests cover: valid token, expired token, revoked token, missing header"
  - npm run lint exits 0
open_questions:
  - "revocation key prefix in Redis (default: jwt:revoked:)"
  - "clock skew tolerance for exp claim (default: 30s)"
```

The spec is the single source of truth. Every agent reads from it and writes to it, and edits propagate to all active agents mid-session, which is what prevents spec rot.

Human Checkpoint 1: Edits That Pay Off

The Coordinator's first draft is rarely the final plan. Stop the Coordinator and edit before approving. The edits that consistently reduce rework at Checkpoint 3:

  • Tighten file scope. Remove speculative refactors the prompt did not request. If the spec lists files outside the feature surface, delete them.
  • Resolve open questions inline. Pick the Redis key prefix, set the clock skew, decide refresh token rotation. Ambiguity at this stage produces inconsistent Implementor work.
  • Add the test paths the Coordinator missed. Coordinators frequently underspecify negative cases (revoked tokens, malformed headers, expired refresh tokens).
  • Cap the blast radius. When the spec touches a shared module, add a note like "Do not modify exported signatures of src/lib/redis.ts; extend only."

Five minutes of spec editing here regularly saves an entire Implementor wave's worth of credits.
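As a concrete illustration, resolving the two open questions and capping blast radius might add a few lines like these to the spec. The key names and values are illustrative, not an Intent-defined schema:

```yaml
config:
  revocation_key_prefix: "jwt:revoked:"
  clock_skew_tolerance: 30s
constraints:
  - "do not modify exported signatures of src/lib/redis.ts; extend only"
acceptance:
  - "new tests cover: malformed Authorization header, expired refresh token"
```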

Step 3: Implementor Agents Execute in Parallel Workspaces

Once the spec is approved, the Coordinator decomposes the work into tasks and delegates to Implementor agents running in parallel waves, each in its own isolated worktree.

How Parallel Actually Works

A few practical points the product surface doesn't make obvious:

  • Fan-out depends on the work. The product page's JWT demo, for example, runs two primary delegated agents (an Auth Token Agent and a Gateway Middleware Agent) alongside background agents for the test suite, lint and type checks, and docs generation. The Coordinator chooses how many specialists to spawn based on how cleanly the spec decomposes.
  • Worktrees prevent collisions during execution. Each Implementor commits to its own branch off the Space's base.
  • The Coordinator reconciles file overlap at the merge step rather than during execution. When two tasks touch the same file, the Coordinator orders them sequentially or merges branches at the end. Ambiguous specs cause the most rework here.
  • A typical wave runs 3-12 minutes for well-scoped tasks on Haiku, longer for ambiguous tasks on Sonnet.

For the JWT example, the primary Implementors split token validation logic from middleware registration and route mounting, while background agents handle test coverage, linting, and doc updates against the same living spec.
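The isolation pattern is plain `git worktree` under the hood. A self-contained sketch of what two Implementor checkouts look like; the paths and branch names are illustrative, not what Intent actually generates:

```shell
# Create a throwaway repo, then give two "Implementors" isolated checkouts,
# each on its own branch off the same base commit.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "base"

git worktree add -q "${repo}-auth" -b intent/auth-token
git worktree add -q "${repo}-mw"   -b intent/gateway-middleware

# Each worktree commits independently; no branch collisions during execution.
git worktree list | grep -c "intent/"
```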


Human Checkpoint 2: Task Decomposition Review

Before Implementors execute, review the Coordinator's task breakdown. Two patterns to flag:

  • Tasks with overlapping file targets that aren't explicitly ordered. Either split them or sequence them.
  • Tasks whose acceptance criteria don't trace back to the spec. When a task says "refactor the request logger" but the spec doesn't mention logging, delete the task.

Model Selection Affects Cost and Speed

A typical Coordinator-plus-Implementor session with mixed routing costs roughly 1,200 to 1,500 credits, compared to 2,500 to 3,000+ when every agent runs Opus at full price. The full breakdown is in the Intent pricing guide.

Recommended routing for cost-effective sessions, aligned with the official Intent pricing guide:

| Role | Recommended Model | Credits per Task |
| --- | --- | --- |
| Coordinator | Sonnet 4.6 (or Gemini 3.1 Pro) | 293 |
| Implementors (well-scoped) | Haiku 4.5 | 88 |
| Implementors (ambiguous scope) | Sonnet 4.6 | 293 |
| Verifier | GPT-5.2, GPT-5.4, or Sonnet 4.6 | 293-420 |

The pricing guide notes that the Coordinator rarely needs Opus-level depth for task decomposition and delegation, so Sonnet 4.6 is the recommended default for that role. Reserve Opus 4.7 for long-running tasks, deep reasoning, and architectural decisions where the extra cost pays off.

Launch discount: Augment Code is running 50% off Opus 4.7 through April 30, 2026. During that window, an Opus-routed Coordinator drops to roughly 244 credits per task. The rates above reflect standard post-promo pricing.

On the Indie plan (40,000 credits per month at $20/mo), the routing above supports roughly 27 to 33 sessions per month at standard rates, assuming 1 Coordinator on Sonnet 4.6, parallel Implementors on Haiku 4.5, and 1 Verifier on Sonnet 4.6 or a GPT-5.x model.
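The session math is easy to sanity-check with the table's per-task rates. A back-of-envelope sketch; the task mix below is an assumption for illustration, not measured data:

```shell
# One hypothetical session: 1 Coordinator on Sonnet (293), 5 well-scoped
# Implementor tasks on Haiku (88 each), 1 ambiguous task on Sonnet (293),
# and 1 Verifier pass at the top of the GPT-5.x range (420).
session=$(( 293 + 5 * 88 + 1 * 293 + 420 ))
echo "$session credits per session"          # lands in the 1,200-1,500 band
echo "$(( 40000 / session )) sessions per month on the Indie plan"
```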

When Opus-everywhere is worth it. Routing every agent to Opus is rarely the right default, but one case justifies it: high-stakes refactors where the cost of Implementor rework (re-reading large dependency graphs, re-running test suites, re-reviewing diffs) exceeds the model spend difference. For a security middleware change touching authentication paths, or a payment integration where a wrong constant is a production incident, the extra ~1,500 credits is cheaper than a second pass.

Step 4: Verifier Checks Against the Spec

After Implementors complete their tasks, the Verifier compares the built code to the living spec. The pre-merge verification guide covers the role in depth.

The Verifier focuses on structural compliance: does the JWT middleware use RS256, do token lifetimes match the 15m/7d parameters, does the Redis revocation list cover the invalidation scenarios in the spec? When the Verifier finds gaps, Implementors get another pass before final review, and the living spec updates to reflect what was actually built.
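The RS256 check is concrete: a JWT's first dot-separated segment is a base64url-encoded header that names the algorithm. A manual spot check of the same property (the token is a hypothetical sample; this is not how the Verifier is implemented):

```shell
# Decode the header segment of a JWT and confirm the signing algorithm.
# The sample header is the standard base64url encoding of
# {"alg":"RS256","typ":"JWT"}; the other segments are placeholders.
token='eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.e30.signature'
header=$(printf '%s' "$token" | cut -d. -f1)
# This particular header needs no base64 padding or url-safe translation.
printf '%s' "$header" | base64 -d | grep -o '"alg":"[^"]*"'
```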

What the Verifier Misses

The Verifier checks structural compliance against the spec. Runtime behavior and security analysis sit outside its scope. The failure classes that consistently slip through:

  • Race conditions in concurrent paths, like token refresh during revocation list updates.
  • Performance regressions that don't violate any spec assertion, like an N+1 query inside a new middleware that the spec didn't bound.
  • Security gaps the spec didn't name. A spec without "constant-time comparison" produces code where the Verifier won't catch a timing leak.
  • Unhandled error paths when the spec only describes the happy path.

The practical implication: the spec needs explicit assertions for the failure modes that matter. A property absent from the spec stays absent from the Verifier's checks.
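Concretely, that means promoting the failure modes above into explicit spec assertions. Illustrative lines, following the draft spec's shape rather than a documented Intent schema:

```yaml
acceptance:
  - "signature comparison uses a constant-time equality check"
  - "invalid and missing tokens return identical 401 responses"
  - "token refresh remains correct while the revocation list is being updated"
  - "middleware adds at most one Redis round trip per request"
```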

Step 5: Review Diffs, Auto-Commit, and Create PR

After the Verifier signs off, the workflow returns to the developer for the final review stage. Intent consolidates diff viewing, git operations, and PR creation into one workspace.

Human Checkpoint 3: Final Diff Review

The third human checkpoint is the final diff review before any code reaches the repository. The built-in code editor with diff viewer and inline git integration covers the core inspection workflow inside Intent.

What Intent Doesn't Do (Yet)

A few real friction points show up during diff review:

  • No LSP integration. Go-to-definition, type-error highlighting, and compiler warnings are not available in the built-in editor. For typed languages like TypeScript, Rust, or Go, keep a separate IDE open during Checkpoint 3.
  • No inline test runner. Tests run via the terminal panel; no UI exists for "run this test" from the diff view.
  • No integrated debugger. Stepping through code requires the external IDE.

The current product position is a workspace built for orchestration and review, with code-level verification handled by an existing IDE alongside it.

Auto-commit, combined with resumable sessions, persists workspace state across sessions. PR, merge, and push buttons render reliably in the current 0.3.x release line, and the Augment Code changelog tracks the latest fixes and improvements.

Step 6: Auto-Filled PR Description and Merge

Intent generates PR descriptions automatically after review completion, summarizing what changed and how. These descriptions update on subsequent commits, so reviewers can focus on why rather than what. The product page illustrates the outcome with two merged PRs (PR #142 merged → main and PR #143 merged → main), each corresponding to a parallel agent's completed work.

With the merge complete, the natural next question is how long the whole loop takes in practice.

Session Duration and What Drives It

Based on the workflow shape, a well-scoped, PR-sized feature typically runs 20-45 minutes end-to-end once past the first 2-3 sessions. The first session usually takes closer to 60-90 minutes as the review checkpoints and spec editing flow become familiar. Complex multi-service changes with extensive spec revisions extend the session because each round of spec edits triggers updated plans and additional Implementor passes.

Three factors drive the variance:

  • Task complexity and file count. A 4-file change finishes in one wave; a 20-file change may need two or three.
  • Model routing. Haiku 4.5 completes well-scoped tasks faster than Opus 4.7 in several benchmarks, so routing well-scoped Implementors to Haiku reduces both time and credit spend.
  • Time spent at the three checkpoints. This is usually the largest single variable, and spec edits at Checkpoint 1 are the highest-impact minutes in the session.

For reference, the Intent live demo by Sam Breed built a website from a Figma comp in just over an hour, though that scope (greenfield, design-driven) differs from the JWT middleware example used here.

When Intent Fits (and When It Does Not)

Intent's multi-agent orchestration is built for PR-sized, multi-file features where coordination through a shared living spec reduces conflicts and specification drift during parallel work. Sam Breed, in the live demo, said Intent "really shines on tasks of the size right now of like a PR," and that very ambitious PRs changing hundreds of files are harder to evaluate.

Fit by Task Shape

| Task Type | File Count | Intent Fit | Why |
| --- | --- | --- | --- |
| Multi-file feature (JWT middleware, billing integration) | 3-15 files, 1-3 services | Strong | Sweet spot for parallel agents and spec alignment |
| Cross-service refactor | 15-40 files | Strong with care | Living spec propagates changes; spec curation matters more |
| Greenfield feature with clear requirements | Any | Strong | Coordinator drafts cleanly when no legacy constraints exist |
| Single-file bug fix | 1-2 files | Weak | Coordination overhead exceeds the task scope |
| Production hotfix under time pressure | Any | Weak | Three checkpoints add latency a hotfix can't absorb |
| Exploratory prototyping | Any | Moderate | Spec-driven approach can constrain early exploration |
| Sprawling refactor | 50+ files | Weak | Coordinator and Verifier struggle to evaluate the full surface |

Tradeoffs Beyond Task Shape

A few non-task factors tilt the decision:

  • Time pressure. The three checkpoints add latency. A hotfix that needs to ship in 15 minutes is faster in a regular IDE.
  • Typed-language workflows. Without LSP in Intent, TypeScript-heavy or Rust-heavy reviews benefit from keeping an IDE open in parallel, which adds context-switching overhead.
  • Solo vs. team. The coordination value of a living spec compounds with team size. For a single developer making a small change, the spec overhead may not justify itself.

For developers evaluating the orchestration layer before committing to a paid plan, BYOA mode supports Intent's spec-driven workflow, agent orchestration, git worktree isolation, and resumable sessions with an existing Claude Code, Codex, or OpenCode subscription. The Context Engine and native Auggie specialist agents are included with paid Augment Code subscriptions.

Run One PR-Sized Feature This Week

Pick a feature that would take 4-6 hours of solo work and run it through Intent end-to-end. Spend real time editing the spec at Checkpoint 1: tighten file scope, resolve every open question, and add the negative test cases the Coordinator missed. Developers most often skip that checkpoint, and it's where most rework cost gets created downstream.

Once the PR merges, compare the spec from Checkpoint 1 against the diffs at Checkpoint 3. The delta between them is the most honest measure of whether Intent paid off for that task, and the clearest signal of where to invest more spec effort next time.



Written by

Ani Galstian

Ani writes about enterprise-scale AI coding tool evaluation, agentic development security, and the operational patterns that make AI agents reliable in production. His guides cover topics like AGENTS.md context files, spec-as-source-of-truth workflows, and how engineering teams should assess AI coding tools across dimensions like auditability and security compliance.
