Policy: Make AI Transparent to the People It Affects

Commitment to AI Transparency for Affected Individuals

People have a legitimate interest in knowing when AI is being used to make decisions about them — decisions that affect their access to services, their opportunities, their treatment by organisations, or their experience of the world. Organisations that deploy AI systems affecting individuals without disclosure are making an ethical choice — that the convenience of non-disclosure outweighs the individual's right to know. Our commitment is to the opposite choice: to be transparent with the people our AI systems affect, to tell them what is happening, to explain what it means, and to give them genuine recourse.

What This Means

Transparency for affected individuals means more than a privacy policy footnote. It means designing AI-affected interactions so that people can clearly understand that AI is involved, what role the AI played, and what options they have. It means making that information accessible — not buried in legal language or hidden behind multiple navigation steps. And it means ensuring that when people ask about AI involvement in decisions affecting them, they receive meaningful answers rather than deflection.

Our commitment to making AI transparent to the people it affects is built on:

  • Disclosure by Design – AI involvement in decisions affecting individuals is disclosed proactively, at the point of interaction — not retrospectively and not only when asked. Disclosure is part of the UX design, not a compliance afterthought. The form and level of disclosure are proportionate to the stakes involved.
  • Plain Language Explanation – Disclosures about AI involvement are written in plain language appropriate to the audience. Technical descriptions of model architecture do not constitute meaningful disclosure. People need to understand what the AI is doing and why in terms they can actually engage with.
  • Decision Basis Communication – Where AI is used to inform or make decisions with material consequences, people are told the basis on which the decision was made — the key factors the AI considered — in terms that allow them to understand and respond to the decision meaningfully.
  • Contestability and Redress – People affected by AI-informed decisions have a clear, accessible route to contest those decisions and request human review. Contestability pathways are designed to be genuinely usable — not so burdensome that they effectively block challenge.
  • Right to Know and Opt Out – Where legal requirements or organisational policy allow, individuals have the ability to request information about how AI has been used in decisions affecting them and to request human review as an alternative. We support these rights proactively, not minimally.
  • Transparency About Limitations – We are transparent about the limitations of AI systems affecting individuals — what the system is not good at, where it may be less reliable, and what steps we take to mitigate those limitations. Transparency about limitations builds more durable trust than silence about them.
  • Transparency Standards Review – Our disclosure and transparency practices are reviewed periodically against evolving regulatory requirements, best-practice guidance, and user feedback. Transparency standards are not set once and then forgotten; they improve over time as understanding of what constitutes meaningful disclosure develops.
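The commitments above imply that every AI-affected decision carries structured disclosure information: the AI's role, a plain-language summary, the key factors considered, known limitations, and a contestability route. As an illustrative sketch only — the field names, role taxonomy, and example values below are assumptions, not part of this policy — such a record might look like:

```python
from dataclasses import dataclass
from enum import Enum


class AIRole(Enum):
    # Hypothetical taxonomy of how AI participated in the decision.
    FULLY_AUTOMATED = "fully_automated"
    RECOMMENDATION = "recommendation"   # a human makes the final call
    TRIAGE = "triage"                   # AI routes or prioritises only


@dataclass
class DisclosureRecord:
    """Illustrative disclosure metadata for one AI-affected decision."""
    decision_id: str
    ai_role: AIRole
    plain_language_summary: str   # what the AI did, in user-facing terms
    key_factors: list[str]        # basis of the decision, per the policy
    known_limitations: list[str]  # where the system may be less reliable
    contest_url: str              # accessible route to request human review


# Hypothetical example of a populated record.
record = DisclosureRecord(
    decision_id="loan-2024-0042",
    ai_role=AIRole.RECOMMENDATION,
    plain_language_summary=(
        "An automated model suggested this outcome; a human adviser "
        "reviewed and confirmed it."
    ),
    key_factors=["income stability", "existing credit commitments"],
    known_limitations=["less reliable for applicants with thin credit files"],
    contest_url="https://example.org/appeal/loan-2024-0042",
)
```

A record like this can drive both the point-of-interaction disclosure shown to the individual and the audit trail behind a contestability request, keeping the two consistent by construction.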

Why This Matters

The legitimacy of AI systems that affect individuals depends substantially on whether those individuals are treated as informed participants rather than subjects of unexplained processes. Regulatory frameworks including GDPR's right to explanation, the EU AI Act's transparency requirements, and sector-specific regulations are increasingly codifying the legal minimum for AI disclosure. Beyond legal compliance, transparency is the ethical foundation of AI deployment at scale — it is how the organisation demonstrates that it respects the agency and dignity of the people its systems affect.

Our Expectation

Every AI system that makes or informs decisions affecting individuals has a documented disclosure strategy, proactive disclosure mechanisms built into the user experience, and accessible contestability pathways. Organisations that deploy AI affecting individuals without meaningful transparency are eroding the social trust that makes AI deployment sustainable. Making AI transparent to the people it affects is how we build AI that makes interactions Happier — more honest, more respectful, and more worthy of the trust we ask people to place in us.

Associated Standards
