
Policy: Build AI Systems That People Want to Use

Commitment to AI That Earns Adoption

There is a particular form of AI project failure that is invisible in technical metrics: the system that works perfectly and is never used. Models that achieve excellent accuracy on evaluation benchmarks but deliver confusing, slow, or untrustworthy experiences in practice. AI capabilities that are technically available but sit bypassed because the user experience does not earn the effort of behaviour change. Mandated adoption that produces compliance without genuine usage. Our commitment is to build AI systems that people genuinely want to use — because they make work easier, decisions better, or experiences more satisfying — not systems that require organisational pressure to sustain engagement.

What This Means

Building AI systems people want to use means treating the user experience as a first-class design concern alongside model performance. It means understanding the workflows AI is being integrated into deeply enough to make that integration genuinely useful rather than an additional cognitive burden. It means being honest with users about what the AI can and cannot do. And it means measuring real adoption — not just installation or first-use rates, but sustained, voluntary engagement — as a primary indicator of AI product success.

Our commitment to building AI systems that people want to use is built on:

  • User Experience as a Design Requirement – The user experience of interacting with an AI system is a design requirement with the same status as model performance. UX investment is planned into AI delivery, not deferred to a post-launch polish phase. Design quality is assessed before deployment, not explained away after it.
  • Workflow Integration Research – Before designing an AI interface, we understand the workflow it will integrate into in depth. We map current user processes, identify friction points, and design AI integration that relieves that friction rather than adding new kinds of it. AI that interrupts or complicates existing workflows will not be adopted, regardless of its technical quality.
  • Trust Calibration by Design – AI systems are designed to help users calibrate appropriate trust — communicating confidence levels, surfacing uncertainty, and making the system's limitations visible rather than hiding them. Users who develop miscalibrated trust — either over-trusting or under-trusting the AI — will eventually make worse decisions than they would have without it.
  • Performance Matters – AI systems that are slow to respond erode adoption. Latency is an AI UX concern, not just an infrastructure concern. We set and enforce response time requirements for AI features with the same rigour applied to any user-facing system.
  • Adoption Measurement – We measure adoption rigorously: not just whether users have access to the AI system, but whether they are using it, continuing to use it over time, and finding it useful. Adoption metrics are tracked by user segment and used to identify groups for whom the AI is not delivering value.
  • User-Reported Experience – We collect and act on qualitative user feedback about AI systems. Quantitative adoption metrics tell us what is happening; qualitative feedback tells us why. Feedback channels are active, visible, and demonstrably acted upon — users who report problems see those problems addressed.
  • Voluntary Adoption as the Success Standard – Our AI systems are designed to earn adoption without mandates. Where adoption is low despite a genuine user need, we investigate whether the problem is with the AI experience — and we fix it. We do not substitute organisational pressure for the design work needed to make AI genuinely useful.
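To make the adoption-measurement principle concrete, the sketch below shows one way to distinguish sustained, voluntary engagement from first use, broken down by user segment. The event data, segment names, and the "active in at least two distinct weeks" threshold are all illustrative assumptions, not a prescribed metric.

```python
from collections import defaultdict

# Hypothetical usage events: (user_id, segment, iso_week_number).
# In practice these would come from product analytics, not a literal list.
events = [
    ("u1", "claims", 1), ("u1", "claims", 2), ("u1", "claims", 3),
    ("u2", "claims", 1),
    ("u3", "underwriting", 1), ("u3", "underwriting", 3),
]

def sustained_adoption_by_segment(events, min_active_weeks=2):
    """Share of users per segment active in at least `min_active_weeks`
    distinct weeks — a proxy for sustained engagement rather than first use."""
    weeks_by_user = defaultdict(set)
    segment_of = {}
    for user, segment, week in events:
        weeks_by_user[user].add(week)
        segment_of[user] = segment

    totals = defaultdict(int)
    sustained = defaultdict(int)
    for user, weeks in weeks_by_user.items():
        seg = segment_of[user]
        totals[seg] += 1
        if len(weeks) >= min_active_weeks:
            sustained[seg] += 1
    return {seg: sustained[seg] / totals[seg] for seg in totals}

print(sustained_adoption_by_segment(events))
# {'claims': 0.5, 'underwriting': 1.0}
```

Segment-level results like these surface exactly the signal the policy asks for: here the hypothetical "claims" segment retains only half its users, flagging a group for whom the AI may not be delivering value and prompting qualitative follow-up rather than a communications campaign.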

Why This Matters

AI systems that are not used deliver no value regardless of their technical quality. The enormous investment in AI development is wasted if the resulting systems cannot earn and sustain user adoption. Mandated adoption creates compliance without genuine engagement, deprives the team of honest feedback, and erodes organisational trust in AI initiatives broadly. The organisations that get the most from AI are those whose people actively seek out AI tools because they have had real, positive experiences with well-designed AI systems.

Our Expectation

AI systems are evaluated on adoption metrics alongside model performance metrics. Low adoption on a technically capable AI system is treated as a product failure requiring investigation and improvement, not a user adoption problem requiring a communications campaign. Building AI that people genuinely want to use is how we create the conditions for AI to make people Happier — not just more technically assisted.

Associated Standards
