Intent is a developer workspace: it coordinates agents on cross-service features through living specs, with governance happening before agents write code. JetBrains Central is an enterprise control plane: it governs agent costs, policy compliance, and audit trails across an organization. They operate at different layers and are more complementary than competing.
TL;DR
Intent orchestrates multiple agents through living specs so developers can build features across services. JetBrains Central is positioned around cost, policy, and audit across an organization's agent activity. The deciding factor is governance timing: Intent governs before agents write code through spec approval; Central governs during and after execution through cost visibility, policy enforcement, and audit controls.
See how Intent's living specs keep parallel agents aligned across cross-service refactors.
Free tier available · VS Code extension · Takes 2 minutes
Why These Products Address Different Failure Modes
I spent the last several weeks evaluating both tools: Intent hands-on, and Central through its public documentation because it remains in closed EAP. The more time I spent with each, the clearer the distinction became: these products address different failure modes in agentic development.
The failure mode Intent addresses is misaligned implementation. I hit this firsthand before using Intent: three agents working on a cross-service feature each produced code that compiled and passed unit tests individually, but the API contracts between services didn't match. One agent used camelCase field names while another used snake_case for the same data model. The auth service expected a JWT in the request header; the gateway agent put it in the body.
Each agent's output looked correct in isolation. Integration revealed hours of rework. That coordination gap, where agents lack shared context about what the other agents are building, is the problem Intent's living spec and coordinator agent close.
The failure mode JetBrains Central addresses is organizational visibility. When developers across a 50-person engineering team each run agents that consume cloud resources, make API calls, and modify codebases, leadership needs visibility into what's happening, what it costs, and whether it complies with policy. Central is built around governance dashboards, cost attribution, and audit controls.
These failure modes surface at different points in the development lifecycle. Misaligned implementation hits during feature development and costs individual teams hours of debugging. Organizational visibility gaps hit during budget reviews and compliance audits and cost engineering leadership predictability.
JetBrains Central: The Enterprise Control Plane
JetBrains Central launched as what JetBrains calls "the control and execution plane for agent-driven software production" in a March 2026 announcement. The announcement describes a layered system connecting developer tools, AI agents, and development infrastructure with visibility into results, costs, and performance.
Central operates across three capability layers:
- Governance and control: Policy enforcement, identity/access management, observability, auditability, and cost attribution for agent-driven work
- Agent execution infrastructure: Cloud agent runtimes and computation provisioning so agents run reliably across environments
- Agent optimization and context: A semantic layer, described as under construction, that aggregates information from code, architecture, runtime behavior, and organizational knowledge
Some parts of Central are already in preview. The Central Console is live for organizations with a JetBrains AI subscription, tracking active AI users, credit consumption, and monthly limit usage. JetBrains Air, the agent execution layer, launched in public preview in March 2026. Air is a free macOS desktop application for delegating coding tasks to multiple AI agents running concurrently; it supports OpenAI Codex, Claude Agent, Gemini CLI, and Junie out of the box. Developers can access Air with a JetBrains AI subscription or by bringing their own API keys from Anthropic, OpenAI, or Google; using Junie specifically requires a JetBrains AI subscription. Additional centralized capabilities are outlined on the Central roadmap.
Reading through Central's public documentation, the product's ambition is broad but the current deliverable is uneven. The Console provides cost visibility and Air provides agent execution, but the full governance stack (policy enforcement, audit controls, centralized BYOK management) and the semantic layer remain in development. For teams evaluating Central today, the practical question is whether credit tracking, user analytics, and Air's agent execution justify joining the EAP, or whether waiting until the governance and semantic layers are closer to complete makes more sense.
Current status: Central's full platform is in closed EAP in Q2 2026, limited to design partners selected by industry, team size, and existing JetBrains customer status. The Console and Air are available separately in public preview. No published pricing for Central, and no disclosed GA date.
Intent: The Developer Workspace for Agent Orchestration
Intent is a standalone macOS desktop application for spec-driven development and multi-agent orchestration.
Where Central focuses on what agents are doing across an organization, Intent focuses on how to coordinate multiple agents on a single feature.
The Intent documentation describes a structured sequence: a Coordinator Agent analyzes the codebase, drafts a living spec, generates tasks, and delegates to specialist agents.

The team can review and edit the generated task plan before any code is written. Intent uses a coordinator/specialist/verifier orchestration model and supports parallel agents in isolated git worktrees. A Verifier Agent checks results against the spec before presenting output.
The living spec is the source of truth and updates to reflect reality as work progresses. When an agent completes work, the spec updates to reflect what was actually built. When requirements change, those updates propagate so subsequent agent tasks operate from the current state.
When I tested Intent on a cross-service API feature, the Coordinator decomposed the work into discrete tasks with explicit acceptance criteria: route definitions, input validation schemas, error handling patterns, timeout constraints, and coverage targets. Each specialist agent received a contract to work against, not a conversational prompt. The Verifier then checked output against those contracts before surfacing a spec-compliance report.
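That verification step can be sketched in a few lines. The function and field names below are illustrative, not Intent's actual API; the point is the mechanic the Verifier applies, comparing an agent's reported results against the acceptance criteria in its task contract.

```python
# Minimal sketch of a verifier-style check (illustrative names, not Intent's real API):
# compare an agent's reported results against the contract's acceptance criteria.

def verify(contract: dict, results: dict) -> list[str]:
    """Return a list of spec-compliance failures; an empty list means pass."""
    failures = []
    for criterion in contract["acceptance_criteria"]:
        # Any criterion the agent did not satisfy becomes a flagged mismatch.
        if not results.get(criterion, False):
            failures.append(f"unmet: {criterion}")
    return failures

contract = {
    "acceptance_criteria": [
        "routes defined",
        "validation schema present",
        "coverage >= 80%",
    ]
}
results = {"routes defined": True, "validation schema present": True, "coverage >= 80%": False}

report = verify(contract, results)
# report == ["unmet: coverage >= 80%"]
```

The useful property is that failures are surfaced as explicit spec mismatches rather than discovered later at integration time.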
Current status: Public beta, macOS only. Windows has an open waitlist; Linux support has not been announced. Uses existing Augment credits with no separate Intent pricing.
See how Intent's living specs keep parallel agents aligned across cross-service refactors.
Free tier available · VS Code extension · Takes 2 minutes
How Governance Works in Each Tool
Intent and JetBrains Central both involve governance, but they govern different things at different times. The table below captures the structural differences:
| Dimension | Intent | JetBrains Central |
|---|---|---|
| Governance timing | Pre-execution: developer approves spec before agents write code | Runtime/organizational: policy enforcement, cost visibility during and after agent execution |
| Governance mechanism | Spec-to-diff traceability; verification artifacts | Policy enforcement, security controls, auditability, cost management |
| Primary user | Individual developer or small team building a feature | Engineering leadership managing agent activity across an organization |
| What it prevents | Misaligned implementation; agents building the wrong thing | Cost overruns; policy violations; audit gaps |
| Control surface | The spec: a structured, evolving document | The console: dashboards, analytics, and access controls |
The architectural distinction behind this table matters more than the table itself. Intent's living spec is a shared artifact that agents read from and write to: a contract that evolves with the implementation. JetBrains Central's semantic layer aggregates context from code, architecture, and runtime behavior to inform agents, but doesn't function as a contract agents are held to. The living spec says "build exactly this"; the semantic layer says "here's what you should know."
That difference has a practical implication: a team can define implementation contracts through Intent's specs and still use Central's console for organizational cost and compliance visibility. These governance models don't overlap; they layer.
Living Specs vs. Semantic Context
"Spec support" means something different in each product.
Intent specs are structured task contracts: role, task, constraints, and acceptance criteria. The Coordinator Agent starts from your task or spec, and the specs auto-update as agents complete work. A concrete example:
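The shape below is illustrative, expressed here as a plain data structure rather than Intent's actual spec format; the field names mirror the four parts named above (role, task, constraints, acceptance criteria) but are assumptions, not the product's real schema.

```python
# Hypothetical task contract in the four-part shape described above.
# Field names and values are illustrative, not Intent's actual spec schema.
spec_contract = {
    "role": "Implement agent",
    "task": "Add POST /v1/orders endpoint to the gateway service",
    "constraints": [
        "Use snake_case field names in request/response schemas",
        "Pass the JWT in the Authorization header, not the request body",
        "Respect the 5s upstream timeout",
    ],
    "acceptance_criteria": [
        "Route is registered and returns 201 on valid input",
        "Invalid payloads return 422 with a structured error body",
        "Unit test coverage for the new module is at least 80%",
    ],
}

# A verifier can treat a missing section as an immediate failure.
required = {"role", "task", "constraints", "acceptance_criteria"}
assert required <= spec_contract.keys()
```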
This is a contract the Implement agent works against and the Verify agent checks against. When the implementation deviates, the spec updates to reflect reality, and the Verifier flags mismatches.
One limitation worth noting: the spec can update to match an incorrect implementation, so the Verify agent and developer review still matter. That keeps the workflow grounded in review rather than treating the spec itself as independent validation.
JetBrains Central does not have a spec authoring workflow in the materials cited here. Its semantic layer provides persistent shared context that agents draw on, but this operates differently from an author-a-spec-then-execute model. If you need agents to work against explicit acceptance criteria with automated verification, Central doesn't provide that mechanism today. You'd need to pair Central with a separate spec workflow.
JetBrains Junie supports a spec-driven approach where the developer manually writes requirements, refines them into a plan, and breaks the plan into tasks. The developer performs the decomposition; Junie does not have a coordinator agent that drafts specs from codebase analysis. For small, single-service tasks, that manual approach works. For cross-service features where three agents need aligned contracts, the manual overhead scales with the number of agents and services involved.
JetBrains AI Assistant also supports Project Rules: project-specific instructions that guide AI responses, such as coding style or framework constraints. These function as standing preferences rather than feature-specific acceptance criteria.
Here's how spec handling compares across the two products:
| Spec Dimension | Intent | JetBrains Central / Junie |
|---|---|---|
| Spec authoring | Coordinator agent drafts from codebase analysis | Developer writes requirements manually in Junie; no spec system in Central |
| Spec lifecycle | Auto-evolves throughout implementation | Not documented; Junie's public materials don't state whether plans auto-evolve or require manual reconciliation when implementation diverges |
| Verification | Verifier agent checks results against spec | Developer reviews at each step |
| Named format support | Not documented in public sources here | Not documented in public sources here |
| Multi-agent coordination | Explicit: coordinator → specialists → verifier | Junie uses developer-managed task breakdown; delegation appears in JetBrains materials |
Agent Flexibility: BYOA vs. Open Agent Model
Both products support external agents, but the mechanisms and tradeoffs differ.
Intent's BYOA model supports Augment's native Auggie alongside external agents: Claude Code, OpenAI Codex, and OpenCode. Developers with existing subscriptions to those tools can use them directly in Intent without an Augment subscription.
The tradeoff I found in testing matters for purchasing decisions. When using external agents in Intent, the coordinator/specialist/verifier orchestration model still works: your external agent receives the task contract from the spec and the Verifier checks its output. What you lose without an Augment subscription is the Context Engine layer, which provides semantic context across 400,000+ files through dependency graph analysis. External agents receive the spec contract but lack deep architectural context about your codebase. For greenfield features or small repositories, that gap may not matter. For cross-service refactors in large codebases, the Context Engine's dependency awareness is where Intent's coordination is strongest.
JetBrains supports external agents through the ACP protocol, an open protocol co-developed with Zed that operates via JSON-RPC over stdio. Named supported agents in the materials cited here include Claude Agent, OpenAI Codex, and Gemini CLI through the ACP Registry. JetBrains Air also supports these agents natively, running them concurrently in a single workspace. The clearest evidence of Central's openness is Cursor running inside JetBrains IDEs through the ACP Registry, though I couldn't test that integration directly since Central remains in closed EAP.
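To make the transport concrete: JSON-RPC over stdio means the client serializes a request object and writes it to the agent subprocess's stdin. The sketch below shows that framing; the method name `session/prompt` and the newline-delimited framing are assumptions for illustration, so consult the ACP specification for the real message shapes.

```python
import json

# Illustrative client-side sketch of a JSON-RPC-over-stdio exchange, the
# transport ACP uses. Method name and framing are assumptions, not the
# actual ACP message schema.

def make_request(req_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request as a single line of JSON."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

msg = make_request(1, "session/prompt", {"prompt": "Add input validation to POST /orders"})
# In a real client, this line is written to the agent subprocess's stdin:
#   agent_proc.stdin.write((msg + "\n").encode())
print(msg)
```

Because the transport is just structured text over pipes, any agent that speaks the protocol can plug in, which is what makes the registry model possible.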
I could not verify specific support for Bring Your Own Key providers or centralized BYOK management from official JetBrains materials.
| Agent Dimension | Intent | JetBrains Central |
|---|---|---|
| External agent support | Claude Code, OpenAI Codex, OpenCode | Claude Agent, OpenAI Codex, Gemini CLI, custom agents |
| Interoperability | BYOA within Intent workspace | Agent Client Protocol (ACP) |
| What external agents retain | Spec contracts, verification, orchestration | Governance visibility, cost tracking |
| What external agents lose | Context Engine semantic analysis (requires Augment subscription) | Unknown; Central is in closed EAP |
| Model selection | Multiple Augment-supported model options | Automatic model selection optimized for performance and cost |
Explore how Intent's Context Engine provides semantic understanding across 400,000+ files, giving every agent architectural awareness of your codebase. Build with Intent →
Pricing and Availability
This is where asymmetry matters most for teams making purchasing decisions today.
Intent uses Augment Code's published credit-based pricing. Auto top-up runs $15 per 24,000 credits. Enterprise includes CMEK, ISO 42001, and SOC 2 Type II compliance.
| Plan | Monthly Cost | Credits/Month | Seats |
|---|---|---|---|
| Indie | $20 | 40,000 | Up to 1 |
| Standard | $60/dev | 130,000 | Up to 20 |
| Max | $200/dev | 450,000 | Up to 20 |
| Enterprise | Custom | Custom | Unlimited |
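Normalizing these plans to cost per 1,000 credits makes the comparison easier; the figures below come straight from the table and the auto top-up rate above.

```python
# Effective cost per 1,000 credits, computed from the published plan table
# and the $15 / 24,000-credit auto top-up rate quoted above.
plans = {
    "Indie":       (20, 40_000),
    "Standard":    (60, 130_000),
    "Max":         (200, 450_000),
    "Auto top-up": (15, 24_000),
}

for name, (dollars, credits) in plans.items():
    per_1k = dollars / credits * 1000
    print(f"{name}: ${per_1k:.3f} per 1,000 credits")
# Indie: $0.500 · Standard: $0.462 · Max: $0.444 · Auto top-up: $0.625
```

The takeaway: per-credit cost drops as plans scale, and relying on auto top-up is the most expensive way to buy credits.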
JetBrains Central pricing has not been published. The official announcement states only that "teams will be able to scale AI usage up or down" with updated organizational pricing coming soon. Whether Central will use the existing JetBrains AI credit model, bundle with the All Products Pack, or introduce entirely new pricing is unknown.
JetBrains Air is free to download during public preview. Developers can connect models in two ways: with a JetBrains AI subscription (AI Pro or AI Ultimate), which includes all agents, or through BYOK with their own API keys from Anthropic, OpenAI, or Google. When both are configured, Air uses BYOK keys first and falls back to the JetBrains subscription. Junie requires a JetBrains AI subscription; the other agents (OpenAI Codex, Claude Agent, Gemini CLI) work with BYOK alone.
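The selection order described above (BYOK first, subscription as fallback, Junie subscription-only) can be sketched as a small decision function. Function and field names here are illustrative, not JetBrains Air's real API.

```python
# Sketch of Air's credential-selection order as described above:
# BYOK keys take priority, the JetBrains AI subscription is the fallback,
# and Junie requires the subscription. Names are illustrative only.

PROVIDERS = {"Claude Agent": "anthropic", "OpenAI Codex": "openai", "Gemini CLI": "google"}

def pick_credential(agent: str, byok_keys: dict, has_ai_subscription: bool):
    if agent == "Junie":
        # Junie only runs on a JetBrains AI subscription; BYOK doesn't apply.
        return "subscription" if has_ai_subscription else None
    provider = PROVIDERS.get(agent)
    if provider and provider in byok_keys:
        return f"byok:{provider}"  # BYOK wins when both are configured
    return "subscription" if has_ai_subscription else None

assert pick_credential("Claude Agent", {"anthropic": "sk-..."}, True) == "byok:anthropic"
assert pick_credential("Junie", {"anthropic": "sk-..."}, False) is None
```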
| Availability | Intent | JetBrains Central |
|---|---|---|
| Status | Public beta | Full platform in closed EAP; Console and Air in public preview |
| Access | Download on macOS | Console: JetBrains AI subscription; Air: free download (BYOK or JetBrains AI subscription); full Central: design partner program |
| Platform | macOS only; Windows waitlist open | Air: macOS only (Windows/Linux planned); Console: cloud-based |
| Can you evaluate it today? | Yes | Console and Air: yes; full Central: only design partners |
| Published pricing | Yes | No (Central); Air free during preview |
| GA date | Not confirmed | Not disclosed |
Teams on Windows or Linux face constraints with both products: they cannot evaluate Intent or Air today, though Intent has a Windows waitlist open and JetBrains has stated Windows and Linux versions of Air are coming.
When to Use Each (or Both)
The decision depends on which problem is more urgent for your team and how many agents you're running.
Choose Intent when your team runs 2+ agents on features that span services or repositories and you're spending hours each week reconciling their output. In my testing, the coordination overhead became noticeable once I had agents working on both sides of an API boundary: one building the endpoint, another consuming it, and both needing to agree on request/response schemas, error codes, and auth patterns. The living spec workflow collapses that reconciliation into the spec itself. Intent works best for teams of 1-5 developers working on cross-service features where the spec contract prevents the integration mismatches I described earlier.
Choose JetBrains Central when your organization has 20+ developers running agents and your engineering leadership needs visibility into what those agents cost and whether they comply with organizational policy. If agent compute spending is growing without centralized tracking or audit trails, Central addresses that governance gap. Central is built for engineering leaders who need to understand and control the organizational cost of agentic development. Teams can start evaluating the Console and Air components today, even while Central's full governance stack remains in EAP.
The "use both" scenario is partially available today. A team could use Intent for spec-driven agent orchestration while using the JetBrains Console for credit tracking and Air for running additional agents outside the Intent workflow. The full governance layer (policy enforcement, audit controls, centralized BYOK) requires waiting for Central's broader EAP to ship, and the timeline is undisclosed.
What Teams Should Do Now
After weeks of evaluating both tools, my recommendation comes down to timeline and urgency.
If your team is losing hours to agent coordination problems today, especially on cross-service features where misaligned contracts between agents cause integration rework, Intent addresses that problem and is available to test now. The coordination layer is the one you can act on immediately.
If your organization's primary concern is agent cost visibility and compliance, the governance tooling you need is still maturing. The JetBrains Console provides basic credit and usage tracking today, and Air provides agent execution in public preview. Central's full governance stack, including policy enforcement, audit controls, and the semantic layer, doesn't have a public timeline. My advice: start building governance practices (cost tracking, usage policies, audit processes) with whatever tools you have now. Whichever platform ships comprehensive governance first will be easier to adopt if your team already has governance habits in place.
Both products point toward the same question: how will teams manage agentic workflows as they scale? Coordination and governance will both be table stakes within the next year. Right now, the coordination layer is the one you can evaluate hands-on.
Intent's living specs keep parallel agents aligned as your plan evolves, eliminating manual reconciliation across services.
Free tier available · VS Code extension · Takes 2 minutes
Written by

Paula Hingel
Developer Evangelist