
Your Test Coverage Number Is Lying to You

January 28, 2026 · 8 min read
Testing · QA · Code Coverage · Strategy · Engineering


I've seen codebases with 95% test coverage that ship critical bugs weekly. I've seen codebases with 40% coverage that rarely break.

The number isn't the problem. The obsession with the number is.

The Coverage Trap

Here's a test that increases coverage but catches nothing:

```python
def test_user_model():
    user = User(name="test", email="test@test.com")
    assert user.name == "test"
    assert user.email == "test@test.com"
```

Congratulations, you've covered the User model. You've tested that Python assignment works. You've caught zero bugs.

Now here's a test at 0% model coverage that catches real problems:

```python
import pytest
# IntegrityError comes from your database layer (e.g. sqlalchemy.exc or django.db)

def test_duplicate_email_rejected():
    create_user(email="jason@test.com")
    with pytest.raises(IntegrityError):
        create_user(email="jason@test.com")
```

This test doesn't care about coverage. It cares about a business rule: emails must be unique. If someone removes the unique constraint during a migration, this test screams.
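To make the failure mode concrete, here's a minimal stdlib sketch of the business rule that test protects, using sqlite3's UNIQUE constraint in place of whatever ORM the real codebase uses (the table name and helper are hypothetical, just for illustration). Drop the UNIQUE clause — the "migration" scenario above — and the duplicate insert silently succeeds:

```python
import sqlite3

def make_db():
    # The UNIQUE constraint is the business rule under test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT UNIQUE NOT NULL)")
    return conn

def create_user(conn, email):
    with conn:
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))

conn = make_db()
create_user(conn, "jason@test.com")
try:
    create_user(conn, "jason@test.com")
    result = "duplicate allowed"   # the regression the test exists to catch
except sqlite3.IntegrityError:
    result = "duplicate rejected"  # constraint intact
print(result)
```

Running this prints `duplicate rejected`; remove `UNIQUE` from the schema and it prints `duplicate allowed` with no error at all, which is exactly why the test asserts on the exception rather than on coverage.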

What I Actually Measure

Instead of line coverage, I track three things:

1. Critical path coverage. Can a user sign up, subscribe, and use the core feature? If those 3 flows are tested end-to-end, I sleep fine. Everything else is bonus.

2. Bug recurrence rate. Every production bug gets a regression test. If the same bug appears twice, that's a process failure, not a code failure.

3. Change failure rate. What percentage of deployments cause incidents? This tells you whether your tests are catching the right things. High coverage + high change failure rate = you're testing the wrong stuff.
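Of the three, change failure rate is the easiest to compute from data you already have. A minimal sketch, assuming a simple deploy record (the `Deploy` type and sample data are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Deploy:
    sha: str
    caused_incident: bool

def change_failure_rate(deploys):
    """Fraction of deploys that caused a production incident."""
    if not deploys:
        return 0.0
    return sum(d.caused_incident for d in deploys) / len(deploys)

deploys = [
    Deploy("a1c3", False),
    Deploy("b2d4", True),   # this one triggered a rollback
    Deploy("c3e5", False),
    Deploy("d4f6", False),
]
print(f"{change_failure_rate(deploys):.0%}")  # 1 incident / 4 deploys -> 25%
```

Track this next to your coverage number: if coverage climbs while the failure rate holds steady, the new tests aren't buying you anything.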

The Coverage Map I Actually Use

I think of my codebase as a risk map, not a coverage report:

| Zone | Risk | Test Strategy |
| --- | --- | --- |
| Payment/billing | Catastrophic | 100% critical path + edge cases |
| Authentication | High | Full flow testing + security scenarios |
| Data mutations | High | Constraint testing + migration testing |
| API contracts | Medium | Schema validation + contract tests |
| UI rendering | Low | Smoke tests + visual regression |
| Internal utils | Very low | Only test if complex logic |

I don't aim for 80% everywhere. I aim for 100% in the red zone and 20% in the green zone. The weighted risk is what matters.
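"Weighted risk" can be a number too. A rough sketch of what that calculation might look like — the zone weights and per-zone coverage figures here are invented, not from any real codebase:

```python
# zone: (risk_weight, line_coverage) -- both values are hypothetical
ZONES = {
    "payments":   (10, 1.00),
    "auth":       (8,  0.95),
    "mutations":  (8,  0.90),
    "api":        (5,  0.70),
    "ui":         (2,  0.30),
    "utils":      (1,  0.20),
}

def weighted_coverage(zones):
    """Coverage averaged by risk weight, not by line count."""
    total_weight = sum(w for w, _ in zones.values())
    return sum(w * cov for w, cov in zones.values()) / total_weight

print(f"{weighted_coverage(ZONES):.0%}")
```

With these numbers the risk-weighted score comes out around 86% even though the green-zone coverage is well under 30% — which is the whole point: the denominator is risk, not lines.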

The Honest Conversation

When a manager asks "what's our test coverage?" they're really asking "how confident are you that this deploy won't break?" Coverage percentage doesn't answer that question.

What answers it:

  • "Every payment flow has end-to-end tests"
  • "We've never had the same bug twice"
  • "Our last 30 deploys had zero rollbacks"

Those are confidence metrics. Coverage is a vanity metric.

My Rule of Thumb

If I'm spending more time maintaining tests than the tests are saving me in bug prevention, I've over-tested. Tests are an investment. Like any investment, the return should exceed the cost.

Write tests that make you money (prevent costly bugs). Skip tests that cost you money (slow down development without catching anything).
