Commitment to Human Oversight of AI Decisions

The question of when AI decisions require human review is not primarily a technical question; it is an ethical, legal, and organisational one. It asks: who is accountable for this decision? What are the consequences if it is wrong? Can the affected person challenge it? Is the AI system reliable enough, and the stakes low enough, for automation to be appropriate? Our commitment is to answer these questions honestly for every AI system we build, and to design human review mechanisms that are genuinely effective: not checkbox processes that provide the appearance of oversight without the substance.
What This Means

Human-in-the-loop oversight means different things at different risk levels. For low-stakes, high-volume automation with strong safeguards, it may mean periodic sampling and review. For high-stakes decisions affecting individuals' access to services, opportunities, or resources, it means human sign-off on every decision. The right level of oversight is determined by the stakes, the system's demonstrated reliability, and the regulatory requirements, not by what is most convenient for throughput.
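As a concrete illustration, the sketch below shows one way a tiered model like this can be expressed as routing logic. The tier names, confidence floor, and sampling rate are illustrative assumptions rather than prescribed values; the point is that the oversight mode is a function of stakes and demonstrated reliability, not of throughput targets.

```python
from enum import Enum, auto
import random


class RiskTier(Enum):
    LOW = auto()     # low-stakes, high-volume, strong safeguards
    MEDIUM = auto()  # material but reversible consequences
    HIGH = auto()    # affects access to services, opportunities, or resources


class OversightMode(Enum):
    AUTOMATED = auto()         # proceeds without individual review
    SAMPLED_REVIEW = auto()    # pulled into the periodic review sample
    ESCALATED_REVIEW = auto()  # routed to a human before taking effect
    HUMAN_SIGNOFF = auto()     # a human approves every decision

# Hypothetical thresholds; real values would come from policy and the
# system's demonstrated reliability.
SAMPLE_RATE = 0.05
CONFIDENCE_FLOOR = 0.90


def oversight_for(tier: RiskTier, model_confidence: float) -> OversightMode:
    """Choose an oversight mode from stakes and demonstrated reliability."""
    if tier is RiskTier.HIGH:
        return OversightMode.HUMAN_SIGNOFF     # sign-off on every decision
    if tier is RiskTier.MEDIUM or model_confidence < CONFIDENCE_FLOOR:
        return OversightMode.ESCALATED_REVIEW  # a human sees it first
    if random.random() < SAMPLE_RATE:
        return OversightMode.SAMPLED_REVIEW    # periodic sampling and review
    return OversightMode.AUTOMATED
```

Note that even the fully automated path exists only alongside a sampling mechanism: no tier escapes review entirely.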
Our commitment to ensuring AI decisions are reviewable by humans is built on:
Why This Matters

Regulatory frameworks, including the EU AI Act, financial services regulations, and sector-specific requirements, increasingly mandate human oversight for high-risk AI systems. Beyond regulatory compliance, human oversight is the mechanism by which we maintain meaningful accountability for decisions that affect people. An organisation that cannot point to a human decision-maker accountable for the outcomes of its AI systems has outsourced its accountability to a system that cannot bear it. Human review is not a constraint on AI efficiency; it is the governance structure that makes AI deployment legitimate.
Our Expectation

Every AI system with material consequences for individuals or the organisation has a documented human oversight model proportionate to its risk tier, with effective review mechanisms, defined escalation paths, and accountability structures that identify named humans responsible for decision outcomes. AI decisions that are not reviewable by humans are not deployable in high-stakes contexts. Ensuring human reviewability is how we keep AI safer, and how we maintain the accountability that stakeholders, regulators, and the people we affect rightfully expect.
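One way to make this expectation checkable is to record each system's oversight model as structured data and gate deployment on it. The sketch below uses hypothetical field names and an invented example system; the substance is that a high-stakes system without human reviewability and a named accountable owner simply does not pass the gate.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OversightModel:
    """Documented oversight model for one AI system (illustrative fields)."""
    system_name: str
    risk_tier: str                     # e.g. "low", "medium", "high"
    review_mechanism: str              # e.g. "human sign-off on every decision"
    escalation_path: tuple[str, ...]   # ordered roles a reviewer can escalate to
    accountable_owner: str             # the named human responsible for outcomes
    decisions_reviewable: bool         # can a human inspect, challenge, override?


def deployable(m: OversightModel) -> bool:
    """Deployment gate: high-stakes systems must be reviewable and owned."""
    if m.risk_tier == "high" and not m.decisions_reviewable:
        return False  # not reviewable by humans, so not deployable high-stakes
    return bool(m.accountable_owner) and len(m.escalation_path) > 0


# Hypothetical example: a high-risk system with sign-off, an escalation
# path, and a named accountable owner passes the gate.
loan_triage = OversightModel(
    system_name="loan-triage",
    risk_tier="high",
    review_mechanism="human sign-off on every decision",
    escalation_path=("senior underwriter", "credit risk committee"),
    accountable_owner="Head of Credit Decisions",
    decisions_reviewable=True,
)
assert deployable(loan_triage)
```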