Commitment to AI Transparency for Affected Individuals

People have a legitimate interest in knowing when AI is being used to make decisions about them — decisions that affect their access to services, their opportunities, their treatment by organisations, or their experience of the world. Organisations that deploy AI systems affecting individuals without disclosure are making an ethical choice — that the convenience of non-disclosure outweighs the individual's right to know. Our commitment is to the opposite choice: to be transparent with the people our AI systems affect, to tell them what is happening, to explain what it means, and to give them genuine recourse.
What This Means

Transparency for affected individuals means more than a privacy policy footnote. It means designing AI-affected interactions so that people can clearly understand that AI is involved, what role the AI played, and what options they have. It means making that information accessible — not buried in legal language or hidden behind multiple navigation steps. And it means ensuring that when people ask about AI involvement in decisions affecting them, they receive meaningful answers rather than deflection.
Our commitment to making AI transparent to the people it affects is built on:
Why This Matters

The legitimacy of AI systems that affect individuals depends substantially on whether those individuals are treated as informed participants rather than subjects of unexplained processes. Regulatory frameworks, including GDPR's right to explanation, the EU AI Act's transparency requirements, and sector-specific regulations, are increasingly codifying the legal minimum for AI disclosure. Beyond legal compliance, transparency is the ethical foundation of AI deployment at scale — it is how the organisation demonstrates that it respects the agency and dignity of the people its systems affect.
Our Expectation

Every AI system that makes or informs decisions affecting individuals must have a documented disclosure strategy, proactive disclosure mechanisms built into the user experience, and accessible contestability pathways. Organisations that deploy AI affecting individuals without meaningful transparency erode the social trust that makes AI deployment sustainable. Making AI transparent to the people it affects is how we build AI that makes interactions Happier — more honest, more respectful, and more worthy of the trust we ask people to place in us.
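As a minimal illustration of how the three required elements above might be tracked per system, the sketch below models a disclosure record and a completeness check. All names and fields here are hypothetical assumptions for illustration, not a prescribed schema or an existing tool.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one record per AI system capturing the three
# elements the expectation names. Field names are illustrative only.
@dataclass
class DisclosureRecord:
    system_name: str
    disclosure_strategy_doc: str                      # reference to the documented strategy
    proactive_mechanisms: list[str] = field(default_factory=list)   # e.g. in-UI notices
    contestability_pathways: list[str] = field(default_factory=list)  # e.g. appeal channels

    def meets_expectation(self) -> bool:
        """True only if all three required elements are present."""
        return bool(
            self.disclosure_strategy_doc
            and self.proactive_mechanisms
            and self.contestability_pathways
        )

record = DisclosureRecord(
    system_name="loan-triage-model",
    disclosure_strategy_doc="policies/ai-disclosure/loan-triage.md",
    proactive_mechanisms=["decision-screen AI notice", "email explanation"],
    contestability_pathways=["human review request form"],
)
print(record.meets_expectation())  # True
```

A check like this could gate deployment reviews: a system whose record lacks any of the three elements would fail the completeness test and could not be signed off.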