
Policy: Involve Users in AI Design and Evaluation

Commitment to User Participation in AI Development

AI systems are, in most cases, built by people who are not the people who will use them. This creates a knowledge gap that technical excellence cannot close. The people who will use an AI system understand the nuances of their work, the edge cases that matter, the ways that simplified model assumptions diverge from complex operational reality, and the user experience requirements that determine whether a tool is genuinely useful or merely technically functional. Our commitment is to actively involve users in designing, testing, and evaluating AI systems, not as a UX formality, but as a genuine source of domain knowledge and feedback that shapes what we build and how we build it.

What This Means

Involving users in AI development means building sustained, structured participation mechanisms into the AI delivery process, from discovery through design, prototype testing, evaluation, post-deployment feedback, and ongoing improvement. It means treating user input as substantive material that shapes decisions, not as a consultation box to tick. And it means including the users who will be most affected by the AI system, including those who may be disadvantaged by it, not just the users who are most convenient to engage.

Our commitment to involving users in AI design and evaluation is built on:

  • User Research Before Design – AI design begins with structured user research: interviews, observation, workflow analysis, and problem validation with real users. Design decisions are grounded in what users actually need, not in assumptions made by people who do not do the work the AI is intended to support.
  • Prototype Testing with Real Users – Prototypes are tested with users who represent the range of people who will use the production system — including less technically fluent users, users in different organisational contexts, and users whose needs may differ from the assumed majority. Prototype testing is iterative, not a single validation gate.
  • Diverse User Representation – User participation includes people from the full range of groups the AI system will affect. We actively seek participation from users who may be at risk of being disadvantaged by the system, not just from users who are most similar to the team building it.
  • User Evaluation Panels – For AI systems with significant impact, we establish ongoing user evaluation panels that provide structured feedback on system performance, fairness, and user experience at regular intervals post-deployment. Panels are compensated and representative.
  • Feedback That Changes Decisions – User feedback is not collected as a documentation exercise. We track how user feedback has influenced design and delivery decisions and make that influence visible — to the team and to the users who provided it. Users who cannot see their feedback making a difference stop providing it.
  • User-Facing AI Literacy – We invest in helping users understand what AI systems can and cannot do, how to interpret AI outputs, and how to use AI tools effectively. Users who lack AI literacy cannot provide effective feedback or make good use of AI assistance — user education is part of the product.
  • Post-Deployment User Engagement – User involvement does not end at deployment. We maintain active post-deployment engagement: surveys, interviews, usage analysis, and feedback channels. AI system improvement is driven by ongoing user insight, not just by engineering judgement.
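The tracking called for in "Feedback That Changes Decisions" could be sketched as a minimal record structure. This is a hypothetical illustration, not an existing tool: the names `FeedbackItem`, `Disposition`, and `influence_rate` are assumptions introduced here to show how feedback, its disposition, and its visible influence on decisions might be recorded.

```python
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    """Outcome recorded for each piece of user feedback."""
    ADOPTED = "adopted"    # feedback changed a design or delivery decision
    DEFERRED = "deferred"  # accepted, scheduled for a later iteration
    DECLINED = "declined"  # not acted on, with a recorded rationale


@dataclass
class FeedbackItem:
    source: str              # user or panel that raised it
    stage: str               # delivery stage, e.g. "prototype", "post-deployment"
    summary: str             # what the user reported
    disposition: Disposition
    rationale: str           # made visible to the users who provided the feedback


def influence_rate(items: list[FeedbackItem]) -> float:
    """Share of feedback that influenced a decision (adopted or deferred)."""
    if not items:
        return 0.0
    acted_on = sum(1 for i in items if i.disposition is not Disposition.DECLINED)
    return acted_on / len(items)
```

A team could publish the rationale fields and the influence rate back to participants, so users can see their feedback making a difference, which is what keeps them providing it.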

Why This Matters

AI systems designed without genuine user involvement consistently miss the mark in ways that would have been avoidable with earlier user engagement. They address the wrong part of the problem, use terminology that means something different to users than to engineers, create workflow friction that was invisible from the outside, or fail to account for the diversity of the user population. User involvement is not a UX courtesy; it is a delivery quality mechanism that produces better AI systems faster, because it catches wrong assumptions early rather than building them into production.

Our Expectation

Every AI system has a documented user involvement plan, with structured participation activities at each delivery stage and a visible record of how user input has shaped the system. AI systems that cannot demonstrate meaningful user participation in their design and evaluation have not met this standard. Involving users actively in AI design and evaluation is how we build AI that makes the people who use it genuinely Happier.
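The expectation above could be checked mechanically: a user involvement plan maps each delivery stage to its planned participation activities, and a plan with an empty stage has not met the standard. This is a minimal sketch under assumptions; the stage names and the function `missing_stages` are hypothetical, not part of any mandated tooling.

```python
# Delivery stages named in this policy (discovery through post-deployment).
REQUIRED_STAGES = (
    "discovery",
    "design",
    "prototype-testing",
    "evaluation",
    "post-deployment",
)


def missing_stages(plan: dict[str, list[str]]) -> list[str]:
    """Return the delivery stages with no planned participation activity.

    An empty result means the plan covers every required stage.
    """
    return [stage for stage in REQUIRED_STAGES if not plan.get(stage)]
```

For example, a plan that lists interviews for discovery but nothing for post-deployment would fail the check until post-deployment engagement activities are added.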

Associated Standards
