Standard: Test Coverage of Critical Paths
Description
Test Coverage of Critical Paths measures how thoroughly the most important business, platform, or data flows are validated through automated testing. It focuses not just on quantity but on quality, ensuring that high-impact areas are protected by meaningful tests.
Critical paths include revenue-generating workflows, data transformations, platform services, and customer-facing APIs. Measuring their test coverage highlights risks and guides investment in resilience.
How to Use
What to Measure
- Identify and document the system’s critical paths (e.g. user login, billing, data ingestion).
- Assess the proportion of these paths covered by automated tests (unit, integration, e2e).
- Include test type, frequency of execution, and historical pass/fail rates.
Test Coverage (%) = (Covered Steps in Critical Path / Total Steps in Critical Path) × 100
You may also define a composite score:
- Coverage by test type (unit/integration/e2e)
- Coverage vs business impact
- Manual vs automated test dependency
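The coverage formula and composite score above can be sketched in Python. The path names, step counts, and business-impact weights below are illustrative, not prescribed values:

```python
# Sketch: per-path coverage and a business-impact-weighted composite score.
# Step counts and weights are hypothetical examples.

def path_coverage(covered_steps, total_steps):
    """Coverage (%) = (covered steps / total steps) x 100."""
    if total_steps == 0:
        return 0.0
    return 100.0 * covered_steps / total_steps

# Hypothetical critical paths: (covered steps, total steps, business weight)
critical_paths = {
    "user_login":     (5, 5, 0.3),
    "billing":        (6, 8, 0.5),
    "data_ingestion": (3, 6, 0.2),
}

def composite_score(paths):
    """Average of per-path coverage, weighted by business impact."""
    total_weight = sum(w for _, _, w in paths.values())
    return sum(path_coverage(c, t) * w for c, t, w in paths.values()) / total_weight

for name, (covered, total, _) in critical_paths.items():
    print(f"{name}: {path_coverage(covered, total):.0f}%")
print(f"composite: {composite_score(critical_paths):.1f}%")
```

Weighting by business impact keeps a well-tested but low-value path from masking a gap in a revenue-critical one.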
Instrumentation Tips
- Use code analysis tools (e.g. SonarQube, Istanbul, Coverage.py) to assess automated test coverage.
- Maintain documentation or maps of business-critical flows.
- Tag and track tests associated with critical services, pipelines, or APIs.
- Visualise this data in dashboards to guide continuous testing improvements.
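One way to make tagged tests trackable is a simple inventory mapping each test to the critical paths it exercises, then flagging paths with no tagged tests. The names here are hypothetical; in practice the tags might come from pytest markers, test annotations, or CI metadata:

```python
# Sketch: track which tests are tagged against which critical path,
# and flag paths that no test covers. All names are illustrative.

CRITICAL_PATHS = ["user_login", "billing", "data_ingestion"]

# Hypothetical test inventory: test id -> critical-path tags
test_tags = {
    "test_login_happy_path":   {"user_login"},
    "test_login_bad_password": {"user_login"},
    "test_invoice_generated":  {"billing"},
}

def untested_paths(paths, tags):
    """Return critical paths that no test is tagged against."""
    covered = set().union(*tags.values()) if tags else set()
    return [p for p in paths if p not in covered]

print(untested_paths(CRITICAL_PATHS, test_tags))  # → ['data_ingestion']
```

A report like this is easy to feed into a dashboard, turning "blind spots" into a concrete, reviewable list.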
Why It Matters
- Prioritises effort: Focuses test investment on what matters most.
- Reduces production risk: Catches regressions early in sensitive flows.
- Supports safe change: Encourages teams to ship confidently and frequently.
- Highlights gaps: Makes testing blind spots visible and addressable.
Best Practices
- Co-create critical path maps with product, data, and platform stakeholders.
- Use TDD or BDD approaches to improve test intent and coverage quality.
- Automate tests as early as possible in the lifecycle (shift-left).
- Review test coverage for critical paths as part of change readiness and platform reviews.
- Include non-functional tests (e.g. latency, security) where applicable.
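A non-functional test on a critical path can be as simple as a latency-budget assertion. In this sketch, `handle_login` and the 0.5-second budget are placeholders for a real flow and its agreed service-level target:

```python
# Sketch: a latency-budget check for a critical flow.
# handle_login is a stand-in for the real code path under test.
import time

def handle_login():
    # Placeholder for the real login flow
    time.sleep(0.01)

def assert_latency_under(fn, budget_seconds):
    """Fail if fn takes longer than the agreed latency budget."""
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, (
        f"{elapsed:.3f}s exceeds the {budget_seconds}s budget"
    )

assert_latency_under(handle_login, 0.5)
```

Run in CI, a check like this catches performance regressions on sensitive flows before they become production incidents.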
Common Pitfalls
- Over-relying on unit test coverage as a proxy for real validation.
- Not maintaining or evolving critical path definitions as systems grow.
- Ignoring cross-system dependencies (e.g. chained services or data hops).
- Testing only happy paths, missing edge cases and real-world usage.
Signals of Success
- Critical flows are clearly defined, visible, and well-covered by automated tests.
- Test failures lead to early detection, not production incidents.
- Teams feel confident in releasing changes to sensitive areas.
- Test coverage is discussed in platform governance and improvement cycles.
Related Measures
- [[Defect Escape Rate]]
- [[Change Failure Rate]]
- [[Quality Gate Compliance]]
- [[CoE/Engineering/Measures/Observability & Detection/Mean Time to Detect (MTTD)]]
- [[Time to Value]]