
JetBrains Central vs Intent (2026): Control Plane or Dev Workspace

Apr 12, 2026
Paula Hingel

Intent is a developer workspace: it coordinates agents through living specs on cross-service features, with governance happening before agents write code. JetBrains Central is an enterprise control plane: it governs agent costs, policy compliance, and audit trails across an organization. They operate at different layers and are more complementary than competing.

TL;DR

Intent orchestrates multiple agents through living specs so developers can build features across services. JetBrains Central is positioned around cost, policy, and audit across an organization's agent activity. The deciding factor is governance timing: Intent governs before agents write code through spec approval; Central governs during and after execution through cost visibility, policy enforcement, and audit controls.

See how Intent's living specs keep parallel agents aligned across cross-service refactors.

Build with Intent →

Free tier available · VS Code extension · Takes 2 minutes

Why These Products Address Different Failure Modes

I spent the last several weeks evaluating both tools: Intent hands-on, and Central through its public documentation because it remains in closed EAP. The more time I spent with each, the clearer the distinction became: these products address different failure modes in agentic development.

The failure mode Intent addresses is misaligned implementation. I hit this firsthand before using Intent: three agents working on a cross-service feature each produced code that compiled and passed unit tests individually, but the API contracts between services didn't match. One agent used camelCase field names while another used snake_case for the same data model. The auth service expected a JWT in the request header; the gateway agent put it in the body.

Each agent's output looked correct in isolation. Integration revealed hours of rework. That coordination gap, where agents lack shared context about what the other agents are building, is the problem Intent's living spec and coordinator agent close.

The failure mode JetBrains Central addresses is organizational visibility. When developers across a 50-person engineering team each run agents that consume cloud resources, make API calls, and modify codebases, leadership needs visibility into what's happening, what it costs, and whether it complies with policy. Central is built around governance dashboards, cost attribution, and audit controls.

These failure modes surface at different points in the development lifecycle. Misaligned implementation hits during feature development and costs individual teams hours of debugging. Organizational visibility gaps surface during budget reviews and compliance audits, and they cost engineering leadership budget predictability.

JetBrains Central: The Enterprise Control Plane

JetBrains Central launched as what JetBrains calls "the control and execution plane for agent-driven software production" in a March 2026 announcement. The announcement describes a layered system connecting developer tools, AI agents, and development infrastructure with visibility into results, costs, and performance.

Central operates across three capability layers:

  • Governance and control: Policy enforcement, identity/access management, observability, auditability, and cost attribution for agent-driven work
  • Agent execution infrastructure: Cloud agent runtimes and computation provisioning so agents run reliably across environments
  • Agent optimization and context: A semantic layer, described as under construction, that aggregates information from code, architecture, runtime behavior, and organizational knowledge

Some parts of Central are already in preview. The Central Console is live for organizations with a JetBrains AI subscription, tracking active AI users, credit consumption, and monthly limit usage. JetBrains Air, the agent execution layer, launched in public preview in March 2026. Air is a free macOS desktop application for delegating coding tasks to multiple AI agents running concurrently; it supports OpenAI Codex, Claude Agent, Gemini CLI, and Junie out of the box. Developers can access Air with a JetBrains AI subscription or by bringing their own API keys from Anthropic, OpenAI, or Google; using Junie specifically requires a JetBrains AI subscription. The roadmap includes additional centralized capabilities described in the Central roadmap.

Reading through Central's public documentation, the product's ambition is broad but the current deliverable is uneven. The Console provides cost visibility and Air provides agent execution, but the full governance stack (policy enforcement, audit controls, centralized BYOK management) and the semantic layer remain in development. For teams evaluating Central today, the practical question is whether credit tracking, user analytics, and Air's agent execution justify joining the EAP, or whether waiting until the governance and semantic layers are closer to complete makes more sense.

Current status: Central's full platform is in closed EAP in Q2 2026, limited to design partners selected by industry, team size, and existing JetBrains customer status. The Console and Air are available separately in public preview. No published pricing for Central, and no disclosed GA date.

Intent: The Developer Workspace for Agent Orchestration

Intent is a standalone macOS desktop application for spec-driven development and multi-agent orchestration.

Where Central focuses on what agents are doing across an organization, Intent focuses on how to coordinate multiple agents on a single feature.

The Intent documentation describes a structured sequence: a Coordinator Agent analyzes the codebase, drafts a living spec, generates tasks, and delegates to specialist agents.


The team can review and edit the generated task plan before any code is written. Intent uses a coordinator/specialist/verifier orchestration model and supports parallel agents in isolated git worktrees. A Verifier Agent checks results against the spec before presenting output.

The living spec is the source of truth and updates to reflect reality as work progresses. When an agent completes work, the spec updates to reflect what was actually built. When requirements change, those updates propagate so subsequent agent tasks operate from the current state.

When I tested Intent on a cross-service API feature, the Coordinator decomposed the work into discrete tasks with explicit acceptance criteria: route definitions, input validation schemas, error handling patterns, timeout constraints, and coverage targets. Each specialist agent received a contract to work against, not a conversational prompt. The Verifier then checked output against those contracts before surfacing a spec-compliance report.
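The Verifier's job, checking output against explicit acceptance criteria, can be sketched as a simple compliance check. This is a hedged illustration in the spirit of what the Verifier Agent does; the criteria names and report shape are mine, not Intent's actual API.

```python
# Illustrative spec-compliance check: criteria and field names are
# hypothetical, not Intent's internal representation.

SPEC = {
    "route": "/weather",
    "min_coverage": 0.80,
    "required_headers": ["X-Request-Id"],
}

def verify(result: dict) -> list[str]:
    """Return a list of spec violations (empty means compliant)."""
    violations = []
    if result.get("route") != SPEC["route"]:
        violations.append(f"route mismatch: {result.get('route')!r}")
    if result.get("coverage", 0.0) < SPEC["min_coverage"]:
        violations.append(f"coverage {result.get('coverage', 0.0):.0%} below 80%")
    for header in SPEC["required_headers"]:
        if header not in result.get("logged_headers", []):
            violations.append(f"missing logged header {header}")
    return violations

print(verify({"route": "/weather", "coverage": 0.85,
              "logged_headers": ["X-Request-Id"]}))  # []
```

The point is that each criterion is machine-checkable, which is what distinguishes a contract from a conversational prompt.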

Current status: Public beta, macOS only. Windows has an open waitlist; Linux support has not been announced. Uses existing Augment credits with no separate Intent pricing.



How Governance Works in Each Tool

Intent and JetBrains Central both involve governance, but they govern different things at different times. The table below captures the structural differences:

| Dimension | Intent | JetBrains Central |
| --- | --- | --- |
| Governance timing | Pre-execution: developer approves spec before agents write code | Runtime/organizational: policy enforcement and cost visibility during and after agent execution |
| Governance mechanism | Spec-to-diff traceability; verification artifacts | Policy enforcement, security controls, auditability, cost management |
| Primary user | Individual developer or small team building a feature | Engineering leadership managing agent activity across an organization |
| What it prevents | Misaligned implementation; agents building the wrong thing | Cost overruns; policy violations; audit gaps |
| Control surface | The spec: a structured, evolving document | The console: dashboards, analytics, and access controls |

The architectural distinction behind this table matters more than the table itself. Intent's living spec is a shared artifact that agents read from and write to: a contract that evolves with the implementation. JetBrains Central's semantic layer aggregates context from code, architecture, and runtime behavior to inform agents, but doesn't function as a contract agents are held to. The living spec says "build exactly this"; the semantic layer says "here's what you should know."

That difference has a practical implication: a team can define implementation contracts through Intent's specs and still use Central's console for organizational cost and compliance visibility. These governance models don't overlap; they layer.

Living Specs vs. Semantic Context

"Spec support" means something different in each product.

Intent specs are structured task contracts: role, task, constraints, and acceptance criteria. The Coordinator Agent starts from your task or spec, and the specs auto-update as agents complete work. A concrete example:

```text
Role: Backend API Developer
Task: Implement GET /weather endpoint
- Route: /weather
- Input validation: zod schema for city parameter
- External call: fetch to weather service
- Error handling: ProblemDetails RFC 7807
Constraints:
- Include X-Request-Id in all logs
- 5-second timeout on external calls
- Cache results for 5 minutes
Acceptance:
- Unit tests pass with 80%+ coverage
- Integration test with mock weather service
```

This is a contract the Implement agent works against and the Verify agent checks against. When the implementation deviates, the spec updates to reflect reality, and the Verifier flags mismatches.

One limitation worth noting: the spec can update to match an incorrect implementation, so the Verify agent and developer review still matter. That keeps the workflow grounded in review rather than treating the spec itself as independent validation.
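One line in the example spec, "ProblemDetails RFC 7807", names a standard error format worth showing concretely. Below is a minimal RFC 7807 problem body; the URI and values are illustrative, not from Intent's output.

```python
# Minimal RFC 7807 "problem detail" body, matching the error-handling
# constraint in the example spec. All values here are illustrative.
import json

problem = {
    "type": "https://example.com/errors/upstream-timeout",  # hypothetical URI
    "title": "Upstream weather service timed out",
    "status": 504,
    "detail": "No response from the weather service within 5 seconds",
    "instance": "/weather?city=Oslo",
}
print(json.dumps(problem, indent=2))
```

Referencing a named standard like this in a spec gives both the specialist agent and the Verifier an unambiguous target.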

JetBrains Central does not have a spec authoring workflow in the materials cited here. Its semantic layer provides persistent shared context that agents draw on, but this operates differently from an author-a-spec-then-execute model. If you need agents to work against explicit acceptance criteria with automated verification, Central doesn't provide that mechanism today. You'd need to pair Central with a separate spec workflow.

JetBrains Junie supports a spec-driven approach where the developer manually writes requirements, refines them into a plan, and breaks the plan into tasks. The developer performs the decomposition; Junie does not have a coordinator agent that drafts specs from codebase analysis. For small, single-service tasks, that manual approach works. For cross-service features where three agents need aligned contracts, the manual overhead scales with the number of agents and services involved.

JetBrains AI Assistant also supports Project Rules: project-specific instructions that guide AI responses, such as coding style or framework constraints. These function as standing preferences rather than feature-specific acceptance criteria.

Here's how spec handling compares across the two products:

| Spec Dimension | Intent | JetBrains Central / Junie |
| --- | --- | --- |
| Spec authoring | Coordinator agent drafts from codebase analysis | Developer writes requirements manually in Junie; no spec system in Central |
| Spec lifecycle | Auto-evolves throughout implementation | Junie's public materials do not describe how plans evolve if implementation diverges |
| Verification | Verifier agent checks results against spec | Developer reviews at each step |
| Named format support | Not documented in public sources | Not documented in public sources |
| Multi-agent coordination | Explicit: coordinator → specialists → verifier | Developer-managed task breakdown in Junie; delegation appears in JetBrains materials |

Agent Flexibility: BYOA vs. Open Agent Model

Both products support external agents, but the mechanisms and tradeoffs differ.

Intent's BYOA model supports Augment's native Auggie alongside external agents: Claude Code, OpenAI Codex, and OpenCode. Developers with existing subscriptions to those tools can use them directly in Intent without an Augment subscription.

The tradeoff I found in testing matters for purchasing decisions. When using external agents in Intent, the coordinator/specialist/verifier orchestration model still works: your external agent receives the task contract from the spec and the Verifier checks its output. What you lose without an Augment subscription is the Context Engine layer, which provides semantic context across 400,000+ files through dependency graph analysis. External agents receive the spec contract but lack deep architectural context about your codebase. For greenfield features or small repositories, that gap may not matter. For cross-service refactors in large codebases, the Context Engine's dependency awareness is where Intent's coordination is strongest.

JetBrains supports external agents through the ACP protocol, an open protocol co-developed with Zed that operates via JSON-RPC over stdio. Named supported agents in the materials cited here include Claude Agent, OpenAI Codex, and Gemini CLI through the ACP Registry. JetBrains Air also supports these agents natively, running them concurrently in a single workspace. The clearest evidence of Central's openness is Cursor running inside JetBrains IDEs through the ACP Registry, though I couldn't test that integration directly since Central remains in closed EAP.
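ACP's transport, JSON-RPC over stdio, is simple enough to sketch. This is a hedged illustration of JSON-RPC 2.0 framing only; the method name and params are placeholders, not taken from the ACP specification.

```python
# Sketch of JSON-RPC 2.0 framing over stdio, the transport ACP uses.
# The method name and params are illustrative placeholders, not the
# actual ACP message vocabulary.
import json
import sys

def frame(method: str, params: dict, request_id: int) -> str:
    """Serialize one JSON-RPC 2.0 request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Newline-delimited framing to the agent process is assumed here.
sys.stdout.write(frame("agent/sendTask", {"prompt": "Summarize the failure"}, 1) + "\n")
```

Because the envelope is plain JSON over standard streams, any editor or agent that speaks the protocol can interoperate without SDK lock-in, which is the point of an open protocol like ACP.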

I could not verify specific support for Bring Your Own Key providers or centralized BYOK management from official JetBrains materials.

| Agent Dimension | Intent | JetBrains Central |
| --- | --- | --- |
| External agent support | Claude Code, OpenAI Codex, OpenCode | Claude Agent, OpenAI Codex, Gemini CLI, custom agents |
| Interoperability | BYOA within Intent workspace | Agent Client Protocol (ACP) |
| What external agents retain | Spec contracts, verification, orchestration | Governance visibility, cost tracking |
| What external agents lose | Context Engine semantic analysis (requires Augment subscription) | Unknown; Central is in closed EAP |
| Model selection | Multiple Augment-supported model options | Automatic model selection optimized for performance and cost |

Explore how Intent's Context Engine provides semantic understanding across 400,000+ files, giving every agent architectural awareness of your codebase. Build with Intent →

Pricing and Availability

This is where asymmetry matters most for teams making purchasing decisions today.


Intent uses Augment Code's published credit-based pricing. Auto top-up runs $15 per 24,000 credits. Enterprise includes CMEK, ISO 42001, and SOC 2 Type II compliance.

| Plan | Monthly Cost | Credits/Month | Seats |
| --- | --- | --- | --- |
| Indie | $20 | 40,000 | Up to 1 |
| Standard | $60/dev | 130,000 | Up to 20 |
| Max | $200/dev | 450,000 | Up to 20 |
| Enterprise | Custom | Custom | Unlimited |
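For budgeting, it helps to normalize the listed prices to an effective rate per 1,000 credits. These figures are derived arithmetic from the table above, not published rates:

```python
# Effective cost per 1,000 credits, computed from the listed prices.
plans = {
    "Indie":    (20, 40_000),
    "Standard": (60, 130_000),
    "Max":      (200, 450_000),
    "Top-up":   (15, 24_000),
}
for name, (dollars, credits) in plans.items():
    rate = dollars / credits * 1000
    print(f"{name:8s} ${rate:.3f} per 1,000 credits")
```

The top-up rate ($0.625 per 1,000 credits) is noticeably higher than any plan's base rate, so teams that routinely hit auto top-up may be better served by the next tier.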

JetBrains Central pricing has not been published. The official announcement states only that "teams will be able to scale AI usage up or down" with updated organizational pricing coming soon. Whether Central will use the existing JetBrains AI credit model, bundle with the All Products Pack, or introduce entirely new pricing is unknown.

JetBrains Air is free to download during public preview. Developers can connect models in two ways: with a JetBrains AI subscription (AI Pro or AI Ultimate), which includes all agents, or through BYOK with their own API keys from Anthropic, OpenAI, or Google. When both are configured, Air uses BYOK keys first and falls back to the JetBrains subscription. Junie requires a JetBrains AI subscription; the other agents (OpenAI Codex, Claude Agent, Gemini CLI) work with BYOK alone.

| Availability | Intent | JetBrains Central |
| --- | --- | --- |
| Status | Public beta | Full platform in closed EAP; Console and Air in public preview |
| Access | Download on macOS | Console: JetBrains AI subscription; Air: free download (BYOK or JetBrains AI subscription); full Central: design partner program |
| Platform | macOS only; Windows waitlist open | Air: macOS only (Windows/Linux planned); Console: cloud-based |
| Can you evaluate it today? | Yes | Console and Air: yes; full Central: only design partners |
| Published pricing | Yes | No (Central); Air free during preview |
| GA date | Not confirmed | Not disclosed |

Teams on Windows or Linux face constraints with both products: they cannot evaluate Intent or Air today, though Intent has a Windows waitlist open and JetBrains has stated Windows and Linux versions of Air are coming.

When to Use Each (or Both)

The decision depends on which problem is more urgent for your team and how many agents you're running.

Choose Intent when your team runs 2+ agents on features that span services or repositories and you're spending hours each week reconciling their output. In my testing, the coordination overhead became noticeable once I had agents working on both sides of an API boundary: one building the endpoint, another consuming it, and both needing to agree on request/response schemas, error codes, and auth patterns. The living spec workflow collapses that reconciliation into the spec itself. Intent works best for teams of 1-5 developers working on cross-service features where the spec contract prevents the integration mismatches I described earlier.

Choose JetBrains Central when your organization has 20+ developers running agents and your engineering leadership needs visibility into what those agents cost and whether they comply with organizational policy. If agent compute spending is growing without centralized tracking or audit trails, Central addresses that governance gap. Central is built for engineering leaders who need to understand and control the organizational cost of agentic development. Teams can start evaluating the Console and Air components today, even while Central's full governance stack remains in EAP.

The "use both" scenario is partially available today. A team could use Intent for spec-driven agent orchestration while using the JetBrains Console for credit tracking and Air for running additional agents outside the Intent workflow. The full governance layer (policy enforcement, audit controls, centralized BYOK) requires waiting for Central's broader EAP to ship, and the timeline is undisclosed.

What Teams Should Do Now

After weeks of evaluating both tools, my recommendation comes down to timeline and urgency.

If your team is losing hours to agent coordination problems today, especially on cross-service features where misaligned contracts between agents cause integration rework, Intent addresses that problem and is available to test now. The coordination layer is the one you can act on immediately.

If your organization's primary concern is agent cost visibility and compliance, the governance tooling you need is still maturing. The JetBrains Console provides basic credit and usage tracking today, and Air provides agent execution in public preview. Central's full governance stack, including policy enforcement, audit controls, and the semantic layer, doesn't have a public timeline. My advice: start building governance practices (cost tracking, usage policies, audit processes) with whatever tools you have now. Whichever platform ships comprehensive governance first will be easier to adopt if your team already has governance habits in place.

Both products point toward the same question: how will teams manage agentic workflows as they scale? Coordination and governance will both be table stakes within the next year. Right now, the coordination layer is the one you can evaluate hands-on.

Intent's living specs keep parallel agents aligned as your plan evolves, eliminating manual reconciliation across services.

Build with Intent

Free tier available · VS Code extension · Takes 2 minutes


Written by Paula Hingel, Developer Evangelist
