October 10, 2025
Progression vs Regression Testing: Guide to Test Implementation

You're pushing code to production. Everything looks fine. Then at 2 AM, the payment system breaks. Not because you touched the payment code, but because your changes broke something else entirely.
This happens more than anyone wants to admit. The question is: how do you catch these problems before they catch you?
Progression testing validates new or modified functionality, while regression testing ensures existing features remain intact after changes. Most teams get this wrong. They either test everything every time (slow) or test nothing systematically (disaster). There's a better way.
The Core Problem
Here's what actually happens in most engineering organizations. Someone adds a feature. They test the feature. It works. They ship it. Two days later, users can't log in. Or checkout breaks. Or the API starts returning 500s for requests that worked yesterday.
The new code didn't fail. The new code made something else fail.
This is the difference between progression and regression testing, though the testing industry doesn't make this particularly clear. According to the ISTQB Glossary, regression testing means "testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software."
That's the official definition. Here's what it actually means: you changed something. Did you break anything else?
Progression testing is simpler. You built something new. Does it work? The TMap methodology calls this "a test of new or adapted parts of a system."
The interesting part isn't the definitions. It's when to use each one.
When Tests Actually Matter
Think about what you're doing to your codebase. Are you adding something? Changing something? Fixing something?
If you're adding a new OAuth login flow, you need progression testing. Does the new thing work? Can users authenticate? Do tokens generate correctly? Does the refresh mechanism function?
But you also need to check: did adding OAuth break the existing email login? Did it mess up session management? Did it break the logout flow?
Most teams only do the first part. They test the new feature. They don't test whether the new feature broke old features. Then they wonder why production keeps catching fire.
Here's a simple rule: new code gets progression testing. Everything else gets regression testing.
The Real Differences
People treat these as the same thing with different names. They're not.
Progression testing happens during development. You're building a feature. You test it continuously. Does it meet requirements? Does it actually work? You're validating that you built the right thing correctly.
Regression testing happens before deployment. You've built something. Now you need to verify you didn't break anything. You're protecting existing functionality.
The timing matters. The scope matters. The goals are completely different.

The mistake most teams make: they skip regression testing because it seems redundant. You already tested the new code. Why test the old code?
Because your new code might have broken it. That's why.
What Progression Testing Looks Like
Say you're adding OAuth to a mobile app. You need to test the OAuth flow. Obviously. But what does that actually mean?
You test authentication. Does the user get redirected correctly? Does the OAuth provider respond? Do you capture the response?
You test token generation. Is the format correct? Does the token contain the right claims? Is it signed properly?
You test the integration. Can your app use the token? Does it fetch user data? Does it handle expired tokens?
Here's actual test code for this:
describe('OAuth Provider Integration', () => {
  test('completes authentication flow', async () => {
    const result = await oauthProvider.authenticate(credentials);
    expect(result.status).toBe('authenticated');
  });

  test('generates valid tokens', async () => {
    const token = await oauthProvider.getAccessToken();
    // A JWT is three base64url segments joined by dots
    expect(token).toMatch(/^[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+$/);
  });

  test('handles token refresh', async () => {
    const newToken = await oauthProvider.refreshToken(expiredToken);
    expect(newToken).toBeDefined();
  });
});
This is progression testing. You're validating the new OAuth functionality works. Nothing more.
But here's what most teams miss: you also need to verify that adding OAuth didn't break email login. Or password reset. Or any other authentication method you support.
That's regression testing. Same system. Different purpose.
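Here's a minimal sketch of that regression side, assuming a hypothetical authService for the pre-existing email login (the names are illustrative, not a real API):

describe('Existing Auth Regression', () => {
  // These tests cover behavior that predates the OAuth work
  test('email login still authenticates', async () => {
    const session = await authService.loginWithEmail('user@example.com', 'password123');
    expect(session.status).toBe('authenticated');
  });

  test('logout still clears the session', async () => {
    const session = await authService.loginWithEmail('user@example.com', 'password123');
    await authService.logout(session.id);
    expect(await authService.getSession(session.id)).toBeNull();
  });
});

If these passed before the OAuth change and fail after it, the new feature broke the old one. Catching that is the entire job of this suite.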
What Regression Testing Actually Protects
Payment systems are a good example. Everyone has payment processing. Nobody wants to break it.
Say you're deploying any change to your system. Doesn't matter what. Could be a new feature. Could be a bug fix. Could be a dependency update.
You need to verify: does payment processing still work?
describe('Payment Processing Validation', () => {
  test('processes valid payments', async () => {
    const result = await paymentService.processPayment(validCard, amount);
    expect(result.status).toBe('completed');
  });

  test('handles declined payments', async () => {
    const result = await paymentService.processPayment(declinedCard, amount);
    expect(result.status).toBe('declined');
  });

  test('processes refunds', async () => {
    const refund = await paymentService.processRefund(transactionId);
    expect(refund.status).toBe('refunded');
  });
});
These tests have nothing to do with your changes. They're testing old functionality. But you run them anyway, because you need to know if your changes broke something critical.
The interesting question: how much regression testing do you need?
Most teams go to extremes. Either they test everything (takes forever) or they test nothing (breaks production).
The right answer is risk-based. Test the critical paths. Test the high-traffic features. Test the things that generate revenue or handle sensitive data.
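One lightweight way to encode that tiering, sketched with Jest's test-name filtering (a convention, not a standard; the services named here are hypothetical): tag critical-path suites and run only those on every merge.

// Critical-path suite: tagged so CI can select it by name
describe('[critical] Checkout', () => {
  test('completes an order', async () => {
    const order = await checkoutService.placeOrder(cart, validCard);
    expect(order.status).toBe('confirmed');
  });
});

// Low-traffic feature: runs only in the full scheduled suite
describe('Admin report export', () => {
  test('exports a monthly CSV', async () => {
    const csv = await reportService.exportCsv('2025-10');
    expect(csv).toContain('order_id');
  });
});

CI can then run npx jest -t '\[critical\]' on every merge (the brackets need escaping because -t takes a regular expression) and the unfiltered suite on a schedule.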
Netflix built a statistical framework for rapid regression detection. Not because they're paranoid. Because at Netflix scale, small failures have big impacts.
When Each Approach Works
The decision isn't complicated. Look at what you're doing.
Building a new feature? Progression testing. You need to validate the feature works before you worry about whether it breaks other things.
Fixing a bug? Regression testing. The fix itself needs validation (progression), but the real risk is breaking something unrelated (regression). See the pinning-test sketch below.
Refactoring code? Regression testing. The whole point of refactoring is that behavior doesn't change. You're validating that.
Updating a framework? Regression testing. The framework change might break things in unexpected ways. You need comprehensive validation.
The pattern: progression testing validates new behavior. Regression testing validates preserved behavior.
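For the bug-fix case, one common pattern is a pinning test: write a test that reproduces the bug, watch it fail, fix the bug, and keep the test forever. It starts life as progression testing and retires into regression testing. A minimal sketch, with a hypothetical pricing bug where a discount was applied after tax instead of before:

// checkoutTotal is a hypothetical function for illustration.
// Before the fix this test fails; after the fix it stays in the suite
// so the same bug can't quietly come back.
test('discount applies before tax, not after', () => {
  // $100 item, 10% discount, 20% tax: (100 - 10) * 1.20 = 108,
  // not 100 * 1.20 - 10 = 110 (the buggy order of operations)
  const total = checkoutTotal({ price: 100, discount: 0.10, taxRate: 0.20 });
  expect(total).toBeCloseTo(108);
});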
Here's where it gets interesting. Complex changes need both.
You're splitting a monolith into microservices. That's a big change. You need progression testing for the new service boundaries. Does inter-service communication work? Is data consistent across services?
But you also need regression testing for the user experience. From the user's perspective, nothing should change. The payment flow should work exactly the same way. The API should return the same responses. The UI should behave identically.
Different tests. Different purposes. Both necessary.
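Sketched side by side, with hypothetical clients (paymentClient for the new service boundary, api for the public HTTP surface): the first test is progression on the new seam, the second is regression on the contract users see.

// Progression: the new service boundary works at all
test('order service can charge through the new payment service', async () => {
  const result = await paymentClient.charge({ orderId: 'o-1', amountCents: 4200 });
  expect(result.status).toBe('completed');
});

// Regression: the public contract is unchanged by the split
test('checkout API keeps its response shape', async () => {
  const response = await api.post('/checkout', { cartId: 'c-1' });
  expect(response.status).toBe(200);
  expect(response.body).toEqual(
    expect.objectContaining({ orderId: expect.any(String), total: expect.any(Number) })
  );
});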
The Common Mistakes
Everyone makes the same mistakes with testing. Here are the big ones.
"Regression testing is only for big releases."
Wrong. Even small changes can break things. A CSS fix can cascade through your stylesheets and break layouts everywhere. A database index can change query plans and slow down critical operations.
The ISTQB Glossary is clear: regression testing detects defects "introduced or uncovered in unchanged areas of the software." Small changes. Unchanged areas. That's the risk.
"Unit tests are progression testing."
Not quite. Unit tests validate individual functions. Progression testing validates complete features. Big difference.
Your unit tests might prove that token generation works. But progression testing proves that authentication, token generation, token usage, and token refresh all work together as a complete OAuth flow.
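The difference in one sketch (names are illustrative): the first test exercises a single function, the second exercises the whole flow the feature promises.

// Unit test: one function, in isolation
test('signToken produces a three-part JWT string', () => {
  const token = signToken({ sub: 'user-1' });
  expect(token.split('.')).toHaveLength(3);
});

// Progression test: the complete OAuth feature, end to end
test('user can authenticate, call the API, and refresh the token', async () => {
  const session = await oauthProvider.authenticate(credentials);

  const profile = await api.getProfile(session.accessToken);
  expect(profile.id).toBeDefined();

  const refreshed = await oauthProvider.refreshToken(session.refreshToken);
  expect(refreshed.accessToken).not.toBe(session.accessToken);
});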
"Automation replaces manual testing."
Automated tests validate known scenarios. Manual testing finds unknown problems. You need both.
Automated tests catch regressions in payment processing. Manual testing catches the subtle UI bug where the error message displays in the wrong color and users don't notice their payment failed.
"Test everything to be safe."
No. Test intelligently. Focus on critical functionality. High-traffic features. Revenue-generating processes. Security-sensitive operations.
Testing everything takes too long and finds too little. You spend hours running tests that validate rarely-used features while critical paths get insufficient coverage.
How This Works in CI/CD
Modern development happens in CI/CD pipelines. Your testing strategy needs to match.
When someone opens a pull request, run progression testing. Fast feedback on whether their changes work. This happens on feature branches.
When changes merge to main, run focused regression testing. Quick validation that critical paths still work. This happens continuously.
Before deploying to production, run comprehensive regression testing. Full validation of system stability. This happens at deployment gates.
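One way to wire up those three stages, sketched as a Jest configuration (the file layout and project names are assumptions, not prescriptions):

// jest.config.js — hypothetical layout: *.progression.test.js lives next to
// new feature code; *.regression.test.js protects existing behavior
module.exports = {
  projects: [
    {
      displayName: 'progression', // pull requests: fast feedback on new code
      testMatch: ['<rootDir>/src/**/*.progression.test.js'],
    },
    {
      displayName: 'regression-critical', // every merge to main: critical paths only
      testMatch: ['<rootDir>/tests/critical/**/*.regression.test.js'],
    },
    {
      displayName: 'regression-full', // deployment gate: everything
      testMatch: ['<rootDir>/tests/**/*.regression.test.js'],
    },
  ],
};

Each pipeline stage then picks its slice with Jest's --selectProjects flag: progression on pull requests, regression-critical on merge, regression-full at the deployment gate.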
The tools matter. GitHub Actions works for simple pipelines. Jenkins handles complex enterprise setups. CircleCI scales well for growing teams.
The frameworks matter too. Playwright and Cypress work well for modern web apps. Selenium handles comprehensive browser testing. BrowserStack covers device compatibility.
But the strategy matters most. Fast progression testing during development. Comprehensive regression testing before deployment. That's the pattern.
Who Tests What
In microservices architectures, testing gets distributed. According to DevOps research, business system teams develop and deploy their code on platforms maintained by platform teams.
This affects testing. Service teams own progression testing for their services. They know the features. They know the requirements. They validate their changes work.
Platform teams provide testing infrastructure. Shared test frameworks. System-level regression suites. Cross-service integration validation.
Quality engineering establishes standards. Testing strategies for complex scenarios. Automation frameworks. Best practices.
The ownership matters. When everyone owns testing, nobody owns testing. When testing ownership is clear, testing actually happens.
What Success Looks Like
How do you know if your testing strategy works?
For progression testing: are new features working when they ship? Are requirements met? Do acceptance criteria pass? Is development velocity maintained?
For regression testing: are production failures decreasing? Are critical paths stable? Are deployments reliable? Are user-impacting bugs caught before release?
Netflix's statistical framework shows what's possible at scale. Rapid regression detection. Systematic risk assessment. Automated validation.
But even at smaller scales, the patterns work. Test new functionality during development. Validate existing functionality before deployment. Catch problems before users do.
The Real Point
Testing isn't about process. It's about risk management.
Every code change carries risk. The new code might not work (progression risk). The new code might break old code (regression risk).
Progression testing manages the first risk. Regression testing manages the second. You need both.
The mistake most teams make: they treat testing as overhead. Something that slows them down. Something to minimize.
Wrong perspective. Testing is insurance. You pay a small cost during development to avoid large costs in production.
A failing test in CI/CD costs minutes. A production failure costs hours or days. Plus user trust. Plus revenue. Plus engineering morale.
The math isn't close.
But here's the thing: testing has to be strategic. You can't test everything. You can't test nothing. You need to test the right things at the right time.
That's what the progression versus regression distinction gives you. A framework for deciding what to test when.
Build something new? Test that it works (progression). Deploy any change? Test that existing things still work (regression).
Simple rules. Big impact.
What This Means
Most testing advice focuses on techniques. Use this framework. Write tests this way. Automate these scenarios.
That's all useful. But it misses the strategic question: what are you actually testing and why?
Progression versus regression testing answers that. You're testing new functionality to validate it works. You're testing old functionality to verify it still works. Different goals. Different approaches.
The teams that get this right catch problems early. The teams that don't get it spend their time firefighting production issues.
The choice seems obvious. But most teams still get it wrong. They either over-test everything (slow) or under-test and break production (expensive).
The middle path: test strategically. Progression testing during development. Regression testing before deployment. Focus on critical paths and high-risk areas.
That's not revolutionary. It's just systematic.
And systematic beats heroic every time.
For teams managing complex codebases across microservices, this gets harder. More services mean more integration points. More integration points mean more potential failures. More potential failures mean more testing requirements.
Tools that understand system-wide context help here. Try Augment Code to identify testing gaps and optimize coverage across distributed systems.
But tools don't replace strategy. You still need to understand what you're testing and why. Progression versus regression. New versus old. Validation versus verification.
Get that right and everything else follows.

Molisha Shah
GTM and Customer Champion