Explainability Coverage Rate measures the proportion of AI decisions for which a meaningful, accessible explanation is available to the affected person, the human reviewer, or the oversight function. It captures not just whether explanation tooling exists technically, but whether explanations are actually generated, surfaced, and understandable for the decisions that matter most.
Explainability is not a binary property of a model — it is a continuous property of the system as experienced by its stakeholders. A model with built-in attention visualisations that no user interface ever surfaces has zero effective explainability coverage. An explanation that is statistically accurate but incomprehensible to a non-expert reviewer provides no practical oversight value. This measure forces precision: for what percentage of consequential decisions can the affected person actually understand why the AI decided what it decided?
Explainability Coverage Rate = (High-Stakes Decisions with Accessible Explanation / Total High-Stakes Decisions) × 100
Optional comprehension metric:

(Users correctly identifying primary decision factor / Total users surveyed) × 100

| Metric Range | Interpretation |
|---|---|
| 100% coverage on all high-stakes decisions | Required for high-risk AI systems in regulated contexts |
| 95–99% coverage | Good — investigate edge cases causing coverage gaps |
| 80–94% coverage | Needs improvement — significant proportion of consequential decisions lack explanation |
| < 80% coverage | Insufficient — explainability requirement is not being met; governance risk is high |
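The headline formula and the interpretation bands above can be sketched in code. The `Decision` record and its field names are illustrative assumptions, not a prescribed logging schema:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One logged AI decision (hypothetical schema for illustration)."""
    decision_id: str
    high_stakes: bool
    explanation_accessible: bool  # explanation generated AND surfaced to the stakeholder

def coverage_rate(decisions: list[Decision]) -> float:
    """Explainability Coverage Rate over high-stakes decisions, as a percentage."""
    high_stakes = [d for d in decisions if d.high_stakes]
    if not high_stakes:
        return 100.0  # no high-stakes decisions, nothing to explain
    covered = sum(1 for d in high_stakes if d.explanation_accessible)
    return 100.0 * covered / len(high_stakes)

def interpret(rate: float) -> str:
    """Map a coverage rate to the interpretation bands in the table above."""
    if rate == 100.0:
        return "meets high-risk requirement"
    if rate >= 95.0:
        return "good - investigate edge cases"
    if rate >= 80.0:
        return "needs improvement"
    return "insufficient - governance risk is high"
```

Note that explanations for non-high-stakes decisions are deliberately excluded from the denominator, matching the formula's focus on consequential decisions.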
**Affected individuals have a right to understand decisions that affect them.** Article 22 of the EU GDPR, the EU AI Act, and various national AI regulatory frameworks establish rights to explanation for consequential automated decisions. Measuring coverage rate operationalises compliance with these rights.
**Explanations are the mechanism through which humans exercise AI oversight.** Reviewers without access to explanations cannot meaningfully evaluate AI decisions; they can only accept or reject without understanding. Explainability coverage is therefore a prerequisite for genuine human oversight.
**Explanation gaps concentrate in edge cases, exactly where bias and error are most likely.** If the explanation system fails silently for unusual input combinations, the decisions most likely to be wrong are precisely the ones without explanations. Coverage measurement catches these dangerous gaps.
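One way to surface such silent gaps is to compute coverage per input segment rather than only in aggregate, so that a low-coverage cohort cannot be averaged away. This sketch assumes a caller-supplied `segment_of` function (a hypothetical helper) that buckets each decision by its input features:

```python
from collections import defaultdict

def coverage_by_segment(decisions, segment_of):
    """Break explainability coverage down by input segment.

    decisions:  iterable of (high_stakes: bool, explained: bool, features: dict)
    segment_of: function mapping a feature dict to a segment label
    Returns a dict of segment -> coverage percentage over high-stakes decisions.
    """
    totals = defaultdict(lambda: [0, 0])  # segment -> [covered, total]
    for high_stakes, explained, features in decisions:
        if not high_stakes:
            continue  # the metric only counts consequential decisions
        seg = segment_of(features)
        totals[seg][1] += 1
        if explained:
            totals[seg][0] += 1
    return {seg: 100.0 * covered / total for seg, (covered, total) in totals.items()}
```

Segments whose coverage falls well below the overall rate are candidates for exactly the silent edge-case failures described above.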
**Explanations build or break institutional trust in AI.** When users, regulators, and oversight functions can understand why an AI system makes decisions, trust in the system is grounded and sustainable. When explanations are unavailable, any trust is blind faith that can collapse at the first failure.
**Doshi-Velez & Kim, *Towards a Rigorous Science of Interpretable Machine Learning* (arXiv, 2017).** This seminal paper proposes a taxonomy of interpretability evaluation that distinguishes application-grounded (real-user testing), human-grounded (proxy-user testing), and functionally-grounded (proxy-metric) evaluation, providing a framework for selecting the appropriate explainability coverage measurement approach for each use case.

**Wachter, Mittelstadt & Russell, *Counterfactual Explanations Without Opening the Black Box* (Harvard Journal of Law & Technology, 2017).** This paper introduces counterfactual explanations as a legally aligned approach to AI explainability, proposing that the most useful explanations for affected individuals answer the question "what would need to change for the decision to be different?", a practically accessible explanation format that informs how coverage and comprehension should be evaluated.