
Policy: Design for Explainability, Not Just Accuracy

Commitment to Explainable AI

An AI system that produces accurate outputs no one can explain is not an asset — it is a risk. When an AI system makes or informs decisions that affect people's lives, opportunities, or experiences, those people have a legitimate interest in understanding why. So do the teams responsible for operating those systems, the business leaders accountable for their outcomes, and the regulators increasingly mandating explainability as a legal requirement. Our commitment is to make explainability a design requirement, not an afterthought bolted on to satisfy compliance — because systems that cannot be interrogated cannot be trusted.

What This Means

Explainability does not mean every model must be a simple linear regression. It means that for any AI system making consequential decisions, we can articulate — to an appropriate level of detail — what features drove an output, why a particular decision was reached, and what would need to change for a different outcome to result. It means building explanation mechanisms into the system from the start, not retrofitting them under pressure. And it means being honest about the limits of explanation for different model architectures.

Our commitment to explainability is built on:

  • Explainability as a Design Requirement – When scoping any AI system, we define the explainability requirements upfront. What level of explanation will users need? What will regulators require? What will the business need for audit and accountability? These requirements shape architecture decisions from day one.
  • Appropriate Model Complexity – We do not default to the most complex model available. Where simpler, inherently interpretable models meet the accuracy requirements, we prefer them. Complexity is justified by demonstrated performance improvement, not assumed.
  • Post-Hoc Explanation Tooling – Where complex models are justified, we invest in post-hoc explanation tools (such as SHAP, LIME, or attention visualisation) and validate that the explanations they produce are faithful to the model's actual reasoning — not just plausible-sounding narratives.
  • Decision Audit Trails – AI-assisted decisions are logged with the inputs, outputs, and key features that drove the output. This creates an audit trail that enables investigation of specific decisions, not just aggregate statistical review.
  • User-Facing Explanations – Where AI outputs are surfaced to end users, those users receive a meaningful explanation of why the system produced that output. Explanations are written for the audience — not for engineers.
  • Contestability Mechanisms – People affected by AI-informed decisions have a defined route to challenge those decisions and have them reviewed by a human. Explainability without contestability is incomplete.
  • Honest Limitation Disclosure – We are transparent about the limits of explanation for any given system. Where a model architecture makes meaningful explanation genuinely difficult, that constraint is documented and factored into decisions about whether the model is appropriate for the use case.
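To make the post-hoc tooling point concrete, the sketch below shows the core idea behind perturbation-based local explanations, the family that tools like SHAP and LIME belong to: replace each feature with a baseline value and measure how much the model's output moves. The model, feature names, and baseline here are hypothetical illustrations, not a substitute for a validated library — real attribution methods (e.g. Shapley values) handle feature interactions far more carefully.

```python
def model(features):
    # Stand-in black-box scorer (hypothetical): a weighted sum.
    return (0.6 * features["income"]
            + 0.3 * features["tenure"]
            - 0.5 * features["missed_payments"])

def explain(model, instance, baseline):
    """Attribute a single prediction by replacing each feature with a
    baseline value and recording how much the output changes.

    A larger absolute attribution means the feature mattered more to
    this specific decision."""
    score = model(instance)
    attributions = {}
    for name in instance:
        perturbed = dict(instance, **{name: baseline[name]})
        attributions[name] = score - model(perturbed)
    return attributions

instance = {"income": 1.0, "tenure": 0.5, "missed_payments": 2.0}
baseline = {"income": 0.0, "tenure": 0.0, "missed_payments": 0.0}

# Print features in order of influence on this one decision.
for name, contribution in sorted(explain(model, instance, baseline).items(),
                                 key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contribution:+.2f}")
```

Note that this is exactly where the faithfulness caveat in the bullet above bites: a perturbation explanation is only trustworthy if the perturbed inputs remain realistic for the model, which is one of the things validation must check.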
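The decision audit trail bullet can likewise be sketched in a few lines. This is a minimal, assumed shape for a per-decision log record — the field names and the `risk-scorer-v3` model identifier are illustrative, and a production trail would add access controls, retention policy, and tamper evidence. The point is that each record captures enough context (inputs, output, driving features, model version) to investigate one specific decision later.

```python
import json
import os
import tempfile
import uuid
from datetime import datetime, timezone

def log_decision(inputs, output, top_features, log_file):
    """Append one AI-assisted decision to a JSON Lines audit trail.

    Each line is a self-contained record, so a single decision can be
    investigated without aggregate statistical review."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": "risk-scorer-v3",  # hypothetical identifier
        "inputs": inputs,
        "output": output,
        "top_features": top_features,  # e.g. from a SHAP-style tool
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Usage: write one record to a throwaway file and read it back.
path = os.path.join(tempfile.mkdtemp(), "decisions.jsonl")
record = log_decision(
    inputs={"income": 42000, "missed_payments": 0},
    output="approve",
    top_features=["income"],
    log_file=path,
)
with open(path) as f:
    print(f.readline().strip())
```

An append-only JSON Lines file is deliberately the simplest possible store; the same record shape works against a database or event stream, which is what contestability reviews would typically query.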

Why This Matters

Regulators across jurisdictions are moving rapidly toward requiring explainability for consequential AI decisions. Beyond compliance, explainability is the foundation of legitimate AI use — it is how we earn and maintain the trust of the people our systems affect. A system that delivers accurate outputs but cannot explain itself is one misclassification away from a crisis it cannot defend. Explainability is not a constraint on AI capability; it is a condition for deploying AI responsibly.

Our Expectation

Every AI system that makes or informs material decisions has documented explanation mechanisms proportionate to the stakes involved. Teams building AI systems that affect people are accountable for being able to explain what those systems do — not just demonstrate that they work. Designing AI that can be interrogated and challenged is how we build systems that are genuinely Better, not just numerically impressive.

Associated Standards

