
Standard: AI teams operate with clear ownership and psychological safety

Purpose and Strategic Importance

This standard requires that AI teams have clear, documented ownership of their systems and operate in an environment where team members can raise concerns, flag risks, and challenge decisions without fear of negative consequences. It supports the policy of making AI work sustainable for the people who build it by recognising that unclear ownership and a lack of psychological safety are among the most damaging and least acknowledged threats to AI team performance and ethics. Teams that cannot speak up about problems build worse systems and take greater risks.

Strategic Impact

  • Creates the conditions for early identification of AI risks, quality problems, and ethical concerns before they become incidents
  • Reduces attrition of senior AI practitioners who leave when they feel their concerns are systematically ignored
  • Enables honest project status reporting that gives leadership accurate information for investment and governance decisions
  • Fosters the collaborative, multi-disciplinary working practices that complex AI systems require across engineering, product, data, and ethics
  • Builds an organisational culture where AI safety concerns are surfaced and addressed rather than suppressed

Risks of Not Having This Standard

  • Safety and quality concerns go unreported because team members fear blame or dismissal; problems are discovered only after harm has occurred
  • Ownership ambiguity leads to gaps in model maintenance, monitoring, and governance that create unmanaged risk
  • High-performing AI engineers leave the organisation when they feel unable to influence decisions about the systems they are responsible for
  • Teams operate under chronic stress without escalation pathways, leading to burnout and decision-making shortcuts
  • AI projects fail silently because team members see the problems coming but do not feel safe reporting them upward

CMMI Maturity Model

Level 1 – Initial

  • People & Culture: Ownership of AI systems is ambiguous; multiple teams share responsibility for the same system without clear demarcation
  • Process & Governance: No ownership registry; systems are orphaned during team reorganisations or staff changes
  • Technology & Tools: No tooling supports ownership tracking or team health measurement
  • Measurement & Metrics: Team health and psychological safety are not measured; concerns are expressed informally or not at all

Level 2 – Managed

  • People & Culture: Named owners are assigned per AI system; team leads are aware of psychological safety as a concept and address concerns individually
  • Process & Governance: An ownership register is maintained; changes in ownership are communicated to affected teams
  • Technology & Tools: Basic retrospective practices give team members a regular forum to raise concerns
  • Measurement & Metrics: Team health is informally assessed in retrospectives; patterns of concern are noted and escalated by team leads
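An ownership register at this level can be very simple. The sketch below is illustrative only, assuming an in-memory register with hypothetical field names; it shows how orphaned systems (the Level 1 risk above) can be detected when staff leave:

```python
from dataclasses import dataclass

@dataclass
class OwnershipRecord:
    system: str          # AI system identifier
    owner: str           # named individual accountable for the system
    last_confirmed: str  # ISO date ownership was last reviewed

def find_orphaned(register, current_staff):
    """Return systems whose named owner is no longer in the organisation."""
    return [r.system for r in register if r.owner not in current_staff]

register = [
    OwnershipRecord("churn-model", "alice", "2024-01-10"),
    OwnershipRecord("fraud-scorer", "bob", "2023-11-02"),
]
# bob has left, so fraud-scorer has no current owner
print(find_orphaned(register, current_staff={"alice"}))  # ['fraud-scorer']
```

In practice the register would live in a shared system of record rather than code, but the check is the same: every system must resolve to a named, current owner.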

Level 3 – Defined

  • People & Culture: Psychological safety is explicitly valued; leaders model vulnerability and constructive challenge as expected behaviours
  • Process & Governance: Ownership accountability is documented in team charters; a governance escalation pathway enables team members to raise concerns about AI safety or ethics outside their line management
  • Technology & Tools: Regular structured surveys (e.g. Google's Project Aristotle framework) measure psychological safety per team; results are reviewed by senior leadership
  • Measurement & Metrics: Psychological safety scores are tracked per team over time; declining scores trigger leadership conversations and structured interventions
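Structured surveys of this kind typically use Likert-scale items, some of them reverse-worded (where agreement indicates low safety). As a minimal sketch of the scoring, assuming 1–7 Likert ratings and hypothetical item identifiers:

```python
def team_safety_score(responses, reverse_items, scale_max=7):
    """Average item scores across respondents, flipping reverse-worded items.

    responses: list of dicts mapping item id -> Likert rating (1..scale_max).
    reverse_items: item ids where agreement indicates LOW safety.
    """
    totals = {}
    for resp in responses:
        for item, rating in resp.items():
            # reverse-score items where agreement means low safety
            score = (scale_max + 1 - rating) if item in reverse_items else rating
            totals[item] = totals.get(item, 0) + score
    return sum(totals.values()) / (len(responses) * len(totals))

# two respondents; item "q1" is reverse-worded
resps = [{"q1": 2, "q2": 6}, {"q1": 1, "q2": 7}]
print(team_safety_score(resps, reverse_items={"q1"}))  # 6.5
```

Tracking this score per team per quarter gives the trend line that the escalation triggers above depend on; the absolute number matters less than sustained decline.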

Level 4 – Quantitatively Managed

  • People & Culture: Psychological safety metrics are treated as leading indicators of AI quality and safety risk; low-scoring teams receive targeted support
  • Process & Governance: AI safety concern escalation routes are tested and validated; the organisation measures the rate at which concerns are raised, acknowledged, and acted upon
  • Technology & Tools: Anonymous concern-raising mechanisms are available; concern trends are analysed for systemic patterns
  • Measurement & Metrics: Concern raise rate, concern resolution rate, and concern-to-incident correlation are tracked to validate the effectiveness of psychological safety investment

Level 5 – Optimising

  • People & Culture: The organisation builds a reputation as an AI employer of choice by demonstrating that team wellbeing and safety voice are genuine priorities
  • Process & Governance: Psychological safety standards are continuously refined based on incident retrospectives that examine whether safety concerns were available but not acted upon
  • Technology & Tools: AI team health data feeds workforce planning and manager effectiveness programmes
  • Measurement & Metrics: Long-term correlation between team psychological safety scores and AI system quality, safety record, and team retention is tracked and published internally

Key Measures

  • Psychological safety score per AI team measured quarterly using a validated survey instrument
  • Percentage of AI systems with a named, current owner documented in the ownership registry
  • Number of AI safety or ethics concerns raised through formal channels in the last quarter
  • Concern resolution rate (proportion of raised concerns acknowledged and actioned within defined timescale)
  • AI team voluntary attrition rate benchmarked against organisational average
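Two of the measures above reduce to straightforward ratios. A minimal sketch, with hypothetical field names and a 30-day resolution SLA assumed for illustration:

```python
def ownership_coverage(systems, register):
    """Percentage of AI systems with a named, current owner in the register."""
    owned = sum(1 for s in systems if register.get(s))
    return 100.0 * owned / len(systems)

def concern_resolution_rate(concerns, sla_days=30):
    """Proportion of raised concerns acknowledged and actioned within the SLA.

    Each concern is a dict with 'days_to_action' (None if not yet actioned).
    """
    on_time = sum(1 for c in concerns
                  if c["days_to_action"] is not None
                  and c["days_to_action"] <= sla_days)
    return on_time / len(concerns)

systems = ["churn-model", "fraud-scorer", "doc-summariser"]
register = {"churn-model": "alice", "fraud-scorer": "bob"}
print(ownership_coverage(systems, register))   # ~66.7 (one system unowned)

concerns = [{"days_to_action": 5},
            {"days_to_action": 45},   # actioned, but outside the SLA
            {"days_to_action": None}] # still open
print(concern_resolution_rate(concerns))
```

The value of these measures is in the trend and in cross-referencing them: a rising concern raise rate alongside a stable resolution rate suggests growing safety voice, while a rising raise rate with a falling resolution rate suggests the escalation pathway is saturating.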
Associated Policies
Associated Practices
  • AI Retrospectives
  • Cross-Functional AI Team Design
  • AI Working Agreements
  • AI Knowledge Sharing and Demos
  • AI On-Call and Incident Ownership
  • Inner-Source for AI Components

