Standard: AI users have accessible mechanisms to challenge or correct AI outputs

Purpose and Strategic Importance

This standard requires that any person who interacts with, or is affected by, the output of an AI system has access to a clear, accessible mechanism to challenge that output, request a human review, or submit a correction. It supports the policy of making AI transparent to the people it affects, recognising that transparency without agency is insufficient. The ability to contest an AI decision is both an ethical obligation and, in many jurisdictions, a legal right under data protection and automated decision-making legislation.

Strategic Impact

  • Protects individuals from being harmed by uncorrectable AI errors by providing a meaningful pathway to remedy
  • Generates a rich source of real-world correction data that can be used to improve model quality over time
  • Builds user trust and long-term adoption of AI systems by demonstrating that the organisation is accountable for AI outputs
  • Reduces regulatory and legal risk in jurisdictions where automated decision-making rights of challenge are enshrined in law
  • Creates a feedback signal that surfaces systematic AI errors that would otherwise be invisible to the development team

Risks of Not Having This Standard

  • Users who receive incorrect or harmful AI outputs have no recourse, compounding harm and destroying trust
  • The organisation faces regulatory action when affected individuals cannot exercise their legal rights to challenge automated decisions
  • Systematic AI errors affecting large populations go undetected because no mechanism exists to aggregate individual correction signals
  • Users disengage from AI-enabled products when they feel powerless to influence or correct AI behaviour that affects them
  • The organisation becomes liable for decisions it cannot retrospectively review or correct because no challenge pathway was built

CMMI Maturity Model

Level 1 – Initial

  • People & Culture: No consideration is given to user challenge rights; AI outputs are presented as final, with no visible correction pathway
  • Process & Governance: No policy requires challenge mechanisms; this is treated as a future concern
  • Technology & Tools: No in-product mechanism for users to flag, challenge, or correct AI outputs
  • Measurement & Metrics: No tracking of user challenges or corrections; the organisation is blind to user dissatisfaction with AI outputs

Level 2 – Managed

  • People & Culture: Teams acknowledge that users should be able to challenge AI outputs; a general feedback mechanism is available
  • Process & Governance: A contact route for AI output challenges is documented; responses are handled by customer support without a defined AI-specific process
  • Technology & Tools: A feedback button or contact form is accessible from AI-generated outputs; submissions are routed to a shared inbox
  • Measurement & Metrics: Volume of AI output challenges is tracked; no structured analysis of challenge outcomes is conducted

Level 3 – Defined

  • People & Culture: Challenge mechanisms are designed with users as co-creators; usability testing ensures the pathway is genuinely accessible, not just technically available
  • Process & Governance: A defined challenge-handling process specifies response times, escalation routes, and the circumstances under which a human review is triggered
  • Technology & Tools: In-product challenge mechanisms are contextually embedded in AI output interfaces; corrections submitted by users are captured and routed to the ML team (a sketch of such a record follows this list)
  • Measurement & Metrics: Challenge volume, resolution rate, resolution time, and challenge outcome (AI output changed versus upheld) are tracked per AI system
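
To make the Level 3 tooling concrete, here is a minimal sketch of a challenge record that an in-product mechanism might capture and route to the ML team. The `ChallengeRecord` fields, status values, and `route_to_ml_team` helper are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch only: a minimal challenge/correction record captured by
# an in-product mechanism. All field and status names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid


@dataclass
class ChallengeRecord:
    ai_system_id: str                  # which AI system produced the output
    output_id: str                     # identifier of the challenged output
    user_id: str                       # who raised the challenge
    challenge_text: str                # the user's objection or explanation
    proposed_correction: Optional[str] = None  # user-supplied correction, if any
    status: str = "open"               # open -> under_review -> changed | upheld
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    challenge_id: str = field(default_factory=lambda: str(uuid.uuid4()))


def route_to_ml_team(record: ChallengeRecord, queue: list) -> None:
    """Place the record on the ML team's review queue (a stand-in for a real
    ticketing or data-pipeline integration)."""
    queue.append(record)


# Example: a user contests an AI-generated decision and proposes a fix.
ml_review_queue: list = []
record = ChallengeRecord(
    ai_system_id="loan-screening-v2",
    output_id="out-8841",
    user_id="user-102",
    challenge_text="The stated income figure is wrong.",
    proposed_correction="Income should read 54,000, not 45,000.",
)
route_to_ml_team(record, ml_review_queue)
```

Capturing the proposed correction alongside the challenge is what later lets resolved challenges feed evaluation and retraining datasets rather than disappearing into a support inbox.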

Level 4 – Quantitatively Managed

  • People & Culture: Challenge data is treated as a quality signal; high challenge rates trigger model review as well as customer service response
  • Process & Governance: Challenge resolution SLAs are defined per use-case risk tier; high-risk decisions have faster response requirements
  • Technology & Tools: Challenge data is integrated into model training pipelines; corrections inform retraining and evaluation datasets (see the sketch after this list)
  • Measurement & Metrics: Challenge rate, resolution SLA compliance, user satisfaction with challenge resolution, and model quality improvement attributable to challenge data are measured
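
As one way of picturing the Level 4 mechanics, the sketch below pairs a hypothetical per-risk-tier SLA table with a filter that turns resolved challenges into candidate retraining examples. Tier names, SLA durations, and field names are assumptions for illustration, not mandated values.

```python
# Illustrative sketch only: per-risk-tier resolution SLAs and a filter that
# selects corrected challenges as retraining candidates. All names and
# durations here are assumptions.
from datetime import timedelta

# Hypothetical SLA table: higher-risk decisions get faster human review.
RESOLUTION_SLA_BY_TIER = {
    "high": timedelta(hours=24),
    "medium": timedelta(days=3),
    "low": timedelta(days=10),
}


def within_sla(risk_tier: str, time_to_resolve: timedelta) -> bool:
    """Check whether a resolved challenge met its tier's SLA."""
    return time_to_resolve <= RESOLUTION_SLA_BY_TIER[risk_tier]


def retraining_candidates(challenges: list[dict]) -> list[dict]:
    """Select challenges whose outcome was 'changed' (the AI output was
    corrected) as labelled examples for retraining and evaluation sets."""
    return [
        {"input": c["model_input"], "corrected_label": c["accepted_correction"]}
        for c in challenges
        if c["outcome"] == "changed" and c.get("accepted_correction")
    ]


# Example usage with illustrative data.
print(within_sla("high", timedelta(hours=20)))  # True: met the 24h SLA
examples = retraining_candidates([
    {"model_input": "applicant-8841", "outcome": "changed",
     "accepted_correction": "approve"},
    {"model_input": "applicant-9012", "outcome": "upheld",
     "accepted_correction": None},
])
print(len(examples))  # 1: only the corrected challenge qualifies
```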

Level 5 – Optimising

  • People & Culture: Challenge mechanisms are continuously improved based on user accessibility research; the organisation proactively reaches out to under-represented groups to understand barriers to challenge
  • Process & Governance: Challenge-handling standards are continuously updated based on regulatory developments and user experience research
  • Technology & Tools: Intelligent challenge routing uses AI to triage challenges and direct them to the most appropriate resolution pathway (a simple stand-in is sketched after this list)
  • Measurement & Metrics: Longitudinal analysis tracks whether AI output quality improvements reduce challenge rates over time; this is used as a measure of responsible AI progress
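
Intelligent routing could take many forms; the sketch below uses simple keyword rules as a stand-in for a trained triage model, with hypothetical pathway names. The point it illustrates is the routing decision itself, not the classification technique.

```python
# Illustrative sketch only: triage routing that directs each challenge to a
# resolution pathway. A real system might use a trained classifier in place
# of these rules; pathway names are hypothetical.
def triage(challenge_text: str, risk_tier: str) -> str:
    """Return the resolution pathway a challenge should be routed to."""
    text = challenge_text.lower()
    if risk_tier == "high":
        return "human_review"            # high-risk decisions always go to a person
    if "incorrect" in text or "wrong" in text or "error" in text:
        return "ml_team_correction"      # likely a model-quality issue
    if "explain" in text or "why" in text:
        return "explanation_request"     # user wants reasoning, not a reversal
    return "customer_support"            # default pathway for everything else


print(triage("The figure is wrong", "low"))           # ml_team_correction
print(triage("Please explain this decision", "low"))  # explanation_request
print(triage("Any issue at all", "high"))             # human_review
```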

Key Measures

  • Percentage of AI-facing user interfaces with an accessible, contextually embedded challenge or correction mechanism
  • Challenge resolution rate within defined SLA per AI system
  • Challenge outcome distribution (AI output changed versus upheld) as a proxy for output quality; both measures are computed in the sketch after this list
  • Volume of user corrections incorporated into model training datasets per quarter
  • User satisfaction score with challenge and correction experience measured through post-resolution surveys
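
As an illustration of how two of these measures might be computed from a challenge log, here is a minimal sketch; the field names mirror the hypothetical records in the earlier sketches and are not a prescribed schema.

```python
# Illustrative sketch only: computing SLA resolution rate and outcome
# distribution from a log of challenges with hypothetical field names.
from collections import Counter


def sla_resolution_rate(challenges: list[dict]) -> float:
    """Share of resolved challenges that were closed within their SLA."""
    resolved = [c for c in challenges if c["status"] in ("changed", "upheld")]
    if not resolved:
        return 0.0
    return sum(c["met_sla"] for c in resolved) / len(resolved)


def outcome_distribution(challenges: list[dict]) -> dict[str, int]:
    """Count challenges by outcome (changed vs upheld); a high 'changed'
    share suggests systematic output-quality problems."""
    return dict(Counter(c["status"] for c in challenges
                        if c["status"] in ("changed", "upheld")))


log = [
    {"status": "changed", "met_sla": True},
    {"status": "upheld", "met_sla": True},
    {"status": "upheld", "met_sla": False},
]
print(sla_resolution_rate(log))   # 0.666...: two of three met their SLA
print(outcome_distribution(log))  # {'changed': 1, 'upheld': 2}
```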

Associated Policies

  • AI is transparent to the people it affects

Associated Practices

  • AI Incident Response
  • Model Explainability Techniques
