Commitment to AI Auditability and Traceability

Every consequential decision an AI system makes or informs should be traceable — to the input that produced it, the model version that processed it, the data that model was trained on, and the output that resulted. Without this traceability, AI systems are black boxes: they produce outputs that affect people, and when those outputs are questioned, investigated, or challenged, there is no record to examine. Our commitment is to build AI systems with auditability as a first-class design requirement — logging AI behaviour in a way that supports accountability, enables investigation, and meets the regulatory expectations that are increasingly codified in law.
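The traceability chain described above — input, model version, training data, output — can be sketched as a minimal decision record. The field names and the `make_record` helper below are illustrative assumptions, not a prescribed schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One traceable AI decision: what went in, which model, what came out."""
    input_digest: str       # SHA-256 of the raw input, so the input can be verified later
    model_version: str      # exact model build that processed the input
    training_data_ref: str  # pointer to the dataset snapshot the model was trained on
    output: str             # the decision or recommendation produced
    timestamp: str          # when the decision was made (UTC, ISO 8601)

def make_record(raw_input: bytes, model_version: str,
                training_data_ref: str, output: str) -> DecisionRecord:
    return DecisionRecord(
        input_digest=hashlib.sha256(raw_input).hexdigest(),
        model_version=model_version,
        training_data_ref=training_data_ref,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical example values for illustration only.
record = make_record(b'{"applicant_id": 42}', "credit-model-2.3.1",
                     "s3://datasets/credit/v7", "declined")
print(json.dumps(asdict(record), indent=2))
```

Hashing the input rather than storing it verbatim is one way to keep the record verifiable without duplicating potentially sensitive data in the log itself.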
What This Means

Auditability means maintaining a durable, structured record of AI system behaviour that is sufficient for post-hoc investigation of specific decisions, aggregate analysis of system performance, and demonstration of compliance with applicable regulatory requirements. It means designing logging infrastructure as part of the AI system architecture, not adding it later. And it means ensuring audit logs are protected, retained for appropriate periods, and accessible to the people who need them — while being protected from the people who should not have access.
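One common way to make a structured log both durable and protected against retroactive edits is hash chaining, where each entry commits to the hash of the previous one. This is an illustrative sketch of the technique, not a mandated design:

```python
import hashlib
import json

class AuditLog:
    """Append-only audit log. Each entry's hash covers the previous entry's
    hash, so any retroactive edit breaks the chain and is detectable."""

    GENESIS = "0" * 64  # fixed starting value for the chain

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": self._last_hash, "hash": entry_hash})
        self._last_hash = entry_hash

    def verify(self) -> bool:
        """Recompute the chain from the start; False if anything was altered."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"model": "m-1.0", "decision": "approve"})
log.append({"model": "m-1.0", "decision": "decline"})
assert log.verify()

log.entries[0]["event"]["decision"] = "decline"  # simulated tampering
assert not log.verify()                          # the edit is detected
```

In a production system the chain would be persisted to append-only storage with the chain head anchored externally; the in-memory list here only demonstrates the tamper-evidence property.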
Our commitment to making AI behaviour auditable and traceable is built on:
Why This Matters

Regulators across multiple jurisdictions are moving toward mandatory auditability requirements for consequential AI systems. Beyond regulatory compliance, auditability is the foundation of accountability — the mechanism by which the organisation can demonstrate that its AI systems behaved appropriately, and investigate and remedy cases where they did not. AI systems that cannot be audited are AI systems that cannot be trusted with decisions that matter. The ability to look back at what an AI system did, and why, is not a nice-to-have — it is a governance requirement for any system deployed at scale.
Our Expectation

Every AI system that makes or informs material decisions has documented, tested audit logging that meets the retention and accessibility requirements of its risk tier. Teams that deploy AI systems without audit logging are not making architectural trade-offs — they are building systems that cannot be governed. Making AI behaviour auditable and traceable is how we ensure our AI systems are safer and worthy of the trust placed in them.
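The tier-based retention expectation could be expressed as a simple policy table plus a compliance check. The tier names and retention periods below are placeholders for illustration, not actual policy values:

```python
from datetime import timedelta

# Hypothetical minimum retention periods per risk tier -- placeholders, not policy.
RETENTION = {
    "high": timedelta(days=7 * 365),
    "medium": timedelta(days=3 * 365),
    "low": timedelta(days=365),
}

def retention_satisfied(risk_tier: str, configured: timedelta) -> bool:
    """True if a system's configured log retention meets its tier's minimum."""
    required = RETENTION.get(risk_tier)
    if required is None:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    return configured >= required

assert retention_satisfied("high", timedelta(days=3000))
assert not retention_satisfied("low", timedelta(days=90))
```

Encoding the policy as data makes the check testable, so "documented, tested audit logging" can include an automated assertion that each deployed system's configuration meets its tier's floor.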