
Rebuilding state management: How we made our VS Code extension 2× faster
December 18, 2025
When your VS Code extension starts crashing on long conversations, you have two choices: patch the symptoms or fix the foundation. We chose the latter, rebuilding our state management system from the ground up using Redux and Redux-Saga.
The result: up to 2× faster chat inference on complex workflows, long threads that load in seconds instead of crashing, and a debugging experience that went from days to minutes.
This is the story of how we got there, and what we learned about building stateful extensions at scale.
The breaking point: when implicit state becomes technical debt
The symptoms started small. Users reported occasional UI freezes. Long conversations would slow down, then crash VS Code entirely. Race conditions appeared in bug reports—the kind that are impossible to reproduce, let alone debug.
As our extension grew more sophisticated, adding agent workflows, parallel tool calls, and eventually parallel conversations, the cracks widened. We were hitting fundamental limits of our architecture:
Implicit state ownership. Reactive state works well in small scopes, but its complexity grows exponentially once it crosses component boundaries. Multiple components claimed responsibility for the same data, and when state changed, we couldn't predict which component would react, or in what order.
Hidden lifecycle side effects. State updates triggered cascading side effects buried in component lifecycle methods. A single change could ripple through dozens of components in unpredictable ways.
Inconsistent async handling. Complex async side effects were orchestrated with primitive tools layered on top of reactive state. Different parts of the codebase handled asynchronous operations differently: some used promises, others callbacks, still others event emitters. Race conditions were inevitable.
The result? Memory leaks we couldn't reliably fix. CPU spikes that correlated with conversation length. And worst of all: we couldn't build the features we wanted. True parallel tool orchestration and multi-threaded conversations were architecturally impossible.
To understand the scope of the problem, we used Augment's agent to analyze three months of bug-related pull requests. The agent's verdict: over 70% traced back to state management issues.
Design principles: what Redux gave us
We needed a complete architectural reset. After prototyping several approaches with the agent, we landed on a Redux-based solution with clear constraints.
Single source of truth. All application state lives in one store with a typed, versioned schema. No more hunting through component trees to find where state actually lives.
Transactional updates. State changes happen through dispatched actions—discrete, named events that describe what happened. Every mutation is explicit and logged.
Normalized storage. We model conversations, exchanges, and tool calls as normalized entities with relationships, not nested objects. This eliminates data duplication and makes updates predictable.
Selector-based derived state. Instead of maintaining ad-hoc caches scattered across components, we compute derived state through selectors—pure functions that transform store data. Selectors are memoized and only recompute when their inputs change.
Explicit lifecycle management. Persistence and hydration happen through explicit actions with clear success/failure paths. No more mystery about when state is saved or loaded.
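To make the principles concrete, here is a minimal sketch of what a normalized, typed schema might look like. The type and field names are illustrative, not our actual schema:

```typescript
// Hypothetical sketch of a normalized state tree. Entities reference
// each other by id, so updating one record never duplicates data.
interface Exchange { id: string; conversationId: string; toolCallIds: string[] }
interface ToolCall { id: string; exchangeId: string; status: "pending" | "done" | "failed" }

interface AppState {
  version: number; // versioned schema, used for persistence and migration
  conversations: { byId: Record<string, { id: string; exchangeIds: string[] }> };
  exchanges: { byId: Record<string, Exchange> };
  toolCalls: { byId: Record<string, ToolCall> };
}

const state: AppState = {
  version: 1,
  conversations: { byId: { c1: { id: "c1", exchangeIds: ["e1"] } } },
  exchanges: { byId: { e1: { id: "e1", conversationId: "c1", toolCallIds: ["t1"] } } },
  toolCalls: { byId: { t1: { id: "t1", exchangeId: "e1", status: "pending" } } },
};

// A selector: a pure function from state to derived data.
const selectToolCallsFor = (s: AppState, exchangeId: string) =>
  s.exchanges.byId[exchangeId].toolCallIds.map((id) => s.toolCalls.byId[id]);
```

Because every relationship is an id reference, marking a tool call as `done` touches exactly one record, and only selectors that read `toolCalls` need to recompute.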
This architecture gave us something we didn't have before: observability. Every state change is triggered by an action. Every action is logged. We can reconstruct the exact sequence of events that led to any bug.
The Svelte problem: building the missing bridge
Redux is a solved problem in React. Libraries like React-Redux provide battle-tested integration patterns. Svelte? Not so much.
We needed Redux's architectural benefits, but we also needed Svelte's reactive performance. The two systems have fundamentally different mental models:
- Redux is explicit: state changes through dispatched actions
- Svelte is implicit: state changes reactively when values update
Existing Svelte state management solutions (stores, contexts) didn't give us the transactional control or observability we needed. We needed Redux. But connecting Redux to Svelte components efficiently required building something new.
Creating performant selectors
The core challenge: Svelte components expect readables—observable objects that notify subscribers when values change. Redux provides a store with a subscription API, but accessing state requires explicit getState() calls.
We built createSelector(), a function that bridges the gap:
Under the hood, createSelector() does several things:
Returns readables, not values. The function returns a Svelte readable that tracks the selector's output. When the output changes, subscribers are notified automatically.
Lazy evaluation. If a component isn't actively using a selector's value, the selector doesn't run. Svelte's reactive system naturally handles this—if $conversation isn't referenced in the component's template or logic, Svelte doesn't subscribe to the readable.
Surgical state tracking. We track which parts of the state tree each selector actually touches during execution. When state changes, we only recompute selectors that depend on the changed slice. A change to state.ui.theme doesn't trigger selectors that only read state.conversations.
Advanced caching. Selectors cache their results based on input equality. If the relevant slice of state hasn't changed (by reference), we return the cached result without recomputing.
This solves Redux's classic performance problem: in React, all selectors connected to components typically run on every state change. With our Svelte integration, we achieved something better—selectors only run when their inputs change and when their outputs are actively being used.
The performance impact was dramatic. We can now connect dozens of components to dozens of selectors without degradation.
Orchestrating async logic with Redux-saga
State management is only half the problem. The other half is async orchestration—managing the complex workflows that happen when users interact with an AI agent.
Consider what happens when a user sends a message:
- Create a new exchange
- Stream the LLM response
- Parse tool calls from the response
- Execute tools in parallel
- Stream results back to the LLM
- Update UI throughout
Each step can fail. Each step needs to update state. Steps can happen in parallel or in sequence. Race conditions lurk everywhere.
We use Redux-Saga (named for the sagas pattern introduced by Garcia-Molina and Salem in their 1987 paper) to manage this complexity.
We chose sagas deliberately. Sagas are both simple and powerful, a combination that lets us manage complicated asynchronous side effects with ease. At the same time, teams are often wary of them, with good reason: you have to be intentional about how you set up your infrastructure to get the benefit of using them.
Sagas are generator functions that orchestrate async operations using a declarative API:
Sagas make async workflows explicit and testable. They also make parallel workflows safe. Before this refactor, many features were architecturally impossible—too many race conditions, too little control. With sagas, we can spawn parallel tasks, wait for their completion, handle errors, and coordinate state updates without conflict.
The agent loop, the core orchestration logic that manages conversation flow, was rewritten entirely using sagas. This unlocked features that were previously impossible: true parallel tool calls, multi-threaded conversations, and reliable cancellation.
The refactoring process: agent-assisted at scale
The vision was clear. The implementation? Daunting.
We weren't just adding a library. We were replacing the fundamental architecture of a production extension with thousands of active users. Every component needed updating. Every async workflow needed rewriting. One mistake could ship widespread regressions.
This is where the agent became critical.
Documentation as agent context
We wrote comprehensive documentation for the new Redux architecture—not just for humans, but specifically structured for agent consumption. The key insight: documentation should be navigable and scoped.
We created a `_docs/` directory pattern:
The README.md acts as an index—short, scannable titles that help the agent identify which document to read. When the agent works in the /redux/ directory, our rules automatically point it to _docs/README.md. The agent scans titles, identifies relevant documents, and ingests only what it needs.
This prevents context window bloat while ensuring the agent always has the right information. And because our context engine indexes these docs, even without explicit rules, the agent can pull relevant sections based on the task.
Porting logic with agent assistance
The old codebase had dozens of components with complex async logic—promise chains, event handlers, lifecycle methods managing timers and subscriptions. All of it needed porting to the new Redux+Saga architecture.
The agent's role wasn't to blindly rewrite code. It was to follow patterns we'd established and apply them consistently across the codebase:
Identify state dependencies: "Find all places where this component reads or writes state."
Extract to selectors: "Create selectors for these state accesses following the patterns in /redux/_docs/selectors.md."
Port async to sagas: "Rewrite this promise chain as a saga using the patterns in /redux/_docs/sagas.md."
The documentation made the agent's output predictable. Instead of generating arbitrary solutions, it followed established patterns. This meant less review time and more confidence in the changes.
The big bang PR
At some point, incremental migration wasn't enough. The core agent loop (the orchestration logic managing conversation flow) couldn't be migrated piecemeal. It needed a complete rewrite.
This meant a single, massive pull request touching hundreds of files. Testing looked like a nightmare.
With the agent, it took a week.
We wrote comprehensive test coverage with agent assistance. The agent generated test cases based on the old behavior, then verified the new implementation matched. When tests failed, we debugged together. The agent could trace through saga execution flow and identify exactly where behavior diverged.
The new architecture's observability helped here too. Because every state change was an action, we could log the exact sequence of actions in both old and new implementations and compare them side-by-side.
Results: measuring the impact
The numbers speak for themselves:
Long threads that crashed VS Code now load in seconds. We tested with conversations containing hundreds of exchanges and thousands of tool calls. The old architecture would freeze the editor, eventually crashing it. The new architecture loads instantly and remains responsive.
Tool-heavy workflows are 1.2–2× faster. For threads with significant parallel tool execution, we measured execution time from user message to complete response. The new saga-based orchestration with proper parallelization cut times nearly in half.
UI freezes are gone. The old architecture would block the main thread during state updates. With batched transactional updates and selective component re-rendering, the UI stays responsive even during complex agent workflows.
But the biggest wins are qualitative:
Debugging went from days to minutes. Before Redux, tracking down a race condition could take days. We'd add logging, try to reproduce the bug, fail, add more logging, repeat. Now? We look at the action log. We see the exact sequence of state changes. We find the bug immediately.
Engineers report using the agent to fix bugs in minutes by simply describing the issue and pointing it to the Redux state. The agent reads the action log, identifies the problematic state transition, and generates a fix.
We can build features that were architecturally impossible. Parallel conversations, where multiple agent threads run simultaneously, now work reliably. Users can start one agent task, spawn another, and work in both without conflicts. This was completely impossible before.
Memory leaks are eliminated. The explicit lifecycle management means subscriptions are properly cleaned up. Components are properly disposed. We can track exactly what's holding references to what.

Before refactor: importing a ~1.5 MB JSON document blocked the UI for about 4.5 seconds, plus another 2 seconds to finish the task.

After refactor: the same document blocks the UI for about 1 second, and finishes in under 1 second.
Lessons for extension developers
If you're building a complex VS Code extension, here's what we learned:
1. Watch for these warning signs
- Long-lived sessions degrade performance: If your extension gets slower the longer it runs, you probably have state management issues
- Bugs are "impossible to reproduce": Race conditions are usually state management problems in disguise
- You're afraid to build features: If new features feel risky because of unpredictable interactions, your architecture has implicit coupling
- Debugging requires hours of logging: If you can't understand state flow without extensive instrumentation, you need better observability
2. Redux's "overhead" is an investment
Yes, Redux is verbose. Yes, you write more boilerplate. But that boilerplate is explicit state changes. In a complex extension with async workflows, long-lived state, and unpredictable user interactions, that explicitness becomes invaluable.
The action log alone is worth the price of admission. When a user reports a bug, they can send you their action log. You can replay it. You can see exactly what happened.
3. Observability is not optional
You cannot debug what you cannot observe. In a stateful, async extension, implicit state changes are bugs waiting to happen. Make every state change explicit and logged. Your future self will thank you.
4. Agent-assisted refactoring needs structure
The agent is incredibly powerful for large-scale refactoring, but it needs guidance. Documentation structured for agent consumption—scannable indexes, focused documents, clear patterns—makes the difference between "the agent generated garbage" and "the agent ported 80% of the code correctly."
The `_docs/` pattern worked brilliantly for us. When the agent enters a directory, it automatically knows where to look for context. The documentation serves both humans and AI.
5. Build bridges between ecosystems
Don't let framework limitations stop you from using the right tool. We needed Redux's architecture but Svelte's performance. The solution wasn't compromise—it was building the integration layer ourselves.
We're planning to open-source our Svelte-Redux connector. If you're facing the same problem, you shouldn't have to solve it twice.
What's next
The refactor unlocked architectural headroom we didn't have before. We're now working on:
UI improvements leveraging the new foundation. With guaranteed performance, we can build richer interfaces: more sophisticated rendering for different file types, real-time collaboration features, advanced visualization of agent workflows.
More complex orchestration. Multi-agent workflows where different agents specialize in different tasks. True concurrent editing where agents work in separate directories without conflicts.
Better debugging tools. We can now build dev tools that visualize state changes, replay actions, and time-travel debug. Think Redux DevTools, but for VS Code extensions.
The state management refactor wasn't just a performance win. It was an architectural reset that makes everything we build next easier, faster, and more reliable.

Dmitry Kharchenko
Senior Software Engineer
Dmitry Kharchenko is a Senior Software Engineer at Augment Code with over 14 years of experience in front-end development. He has built scalable architectures for complex single-page applications across companies including Descript, Sigma Computing, and Holloway, where he served as Tech Lead for over six years. His technical expertise spans React, TypeScript, Node.js, and GraphQL, with deep experience solving challenging problems from client-side natural language processing to real-time application architecture. Dmitry holds a Master's degree in Mathematics and Computer Science from Murmansk State Arctic University.