
Standard: AI governance frameworks are documented and followed across the lifecycle

Purpose and Strategic Importance

This standard requires that AI systems are governed by a documented framework that defines accountability, decision rights, review checkpoints, and escalation paths at every stage of the AI lifecycle — from inception through retirement. It supports the policy of governing AI models throughout their lifecycle by making governance a structured, continuous activity rather than a compliance exercise conducted only at deployment. AI governance that exists only on paper or only at launch provides the illusion of control without the substance.

Strategic Impact

  • Creates clear accountability for AI decisions and outcomes at every lifecycle stage, reducing ambiguity about who is responsible for what
  • Enables the organisation to demonstrate due diligence to regulators, auditors, and affected communities
  • Reduces the risk of AI systems continuing to operate beyond their safe or effective lifespan because retirement criteria are unclear
  • Provides the structural foundation for scaling AI across the organisation without losing oversight as the portfolio grows
  • Aligns AI governance with existing enterprise risk, legal, and compliance frameworks, reducing the overhead of maintaining parallel processes

Risks of Not Having This Standard

  • AI systems operate without clear ownership, creating accountability gaps that are exposed only when incidents occur
  • Governance reviews that happen only at launch fail to catch issues that emerge during the production lifecycle
  • Regulatory compliance is impossible to demonstrate when governance activities are not documented and evidenced
  • The AI portfolio grows faster than the organisation's capacity to govern it, creating unmanaged risk accumulation
  • Retirement decisions are delayed indefinitely because no one has the authority or the criteria to decommission a model

CMMI Maturity Model

Level 1 – Initial

  • People & Culture – AI governance is informal and reactive; accountability is assumed rather than assigned
  • Process & Governance – No documented governance framework; each AI project invents its own oversight approach or operates without one
  • Technology & Tools – No governance tooling; compliance evidence is ad hoc and inconsistently maintained
  • Measurement & Metrics – No metrics for governance compliance; adherence to any governance expectations is unverifiable

Level 2 – Managed

  • People & Culture – A governance owner is identified for each AI system; roles and responsibilities are informally communicated
  • Process & Governance – A basic governance document is produced for each AI system covering ownership, decision rights, and a review schedule
  • Technology & Tools – Governance records are stored in a shared repository; review minutes and decision logs are maintained
  • Measurement & Metrics – Governance review completion rate is tracked; overdue reviews are flagged to team leads
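At Level 2, tracking review completion and flagging overdue reviews amounts to comparing each system's last review date against its schedule. A minimal sketch in Python (the record fields and the 90-day default cadence are illustrative assumptions, not part of the standard):

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class GovernanceRecord:
    system_name: str
    owner: str                      # identified governance owner (Level 2 expectation)
    last_review: date
    review_interval_days: int = 90  # assumed default cadence


    def is_overdue(self, today: date) -> bool:
        """A review is overdue once today passes last review + interval."""
        return today > self.last_review + timedelta(days=self.review_interval_days)


def completion_rate(records: list[GovernanceRecord], today: date) -> float:
    """Fraction of systems whose most recent review is still within its interval."""
    if not records:
        return 1.0
    on_time = sum(1 for r in records if not r.is_overdue(today))
    return on_time / len(records)
```

A team lead can run this over the shared repository's records to produce the flagged list and the completion rate the level describes.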

Level 3 – Defined

  • People & Culture – A defined AI governance framework is understood by all team members; governance responsibilities are embedded in job descriptions and team charters
  • Process & Governance – A comprehensive lifecycle governance framework covers inception review, deployment approval, post-deployment monitoring, periodic review, and retirement criteria
  • Technology & Tools – A governance management platform tracks review schedules, ownership assignments, and compliance status across the AI portfolio
  • Measurement & Metrics – Governance compliance rate, review cadence adherence, and outstanding governance actions are reported to an AI governance board

Level 4 – Quantitatively Managed

  • People & Culture – Governance effectiveness is measured alongside compliance; teams assess whether governance activities are actually producing better outcomes
  • Process & Governance – Governance frameworks are tiered by AI risk class; higher-risk systems have more frequent and rigorous review requirements
  • Technology & Tools – Governance workflow automation triggers reviews, collects evidence, and escalates non-compliance automatically
  • Measurement & Metrics – Governance quality metrics (completeness of evidence, reviewer qualification, decision traceability) are tracked alongside compliance volume metrics

Level 5 – Optimising

  • People & Culture – Governance learnings from incidents and reviews are systematically incorporated into framework updates; governance is treated as a continuously improving practice
  • Process & Governance – The governance framework is benchmarked against regulatory requirements and industry standards; gaps are identified and closed proactively
  • Technology & Tools – AI-assisted governance tooling supports risk assessment, evidence collection, and anomaly detection across the portfolio
  • Measurement & Metrics – Governance maturity is assessed annually against external benchmarks; improvement plans are published and tracked

Key Measures

  • Percentage of production AI systems with a current, documented governance record meeting the defined framework standard
  • Governance review completion rate against the defined review schedule per AI risk tier
  • Number of AI systems without an identified governance owner at any point in the quarter
  • Average time from governance action identification to resolution
  • Number of AI governance framework updates made in the last year based on incident learnings or regulatory changes
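Several of these measures fall out directly from a well-kept governance record set. A sketch of the first and third measures, assuming each production system carries a record with a documented-framework flag and an optional owner field (both field names are illustrative):

```python
def governance_coverage(records: list[dict]) -> float:
    """Percentage of production AI systems with a current, documented
    governance record (the first key measure)."""
    if not records:
        return 100.0
    documented = sum(1 for r in records if r.get("has_current_record"))
    return 100.0 * documented / len(records)


def systems_without_owner(records: list[dict]) -> list[str]:
    """Systems lacking an identified governance owner (the third key measure)."""
    return [r["system"] for r in records if not r.get("owner")]
```

Both figures are only as trustworthy as the underlying records, which is why the standard ties them to a governance repository rather than to self-reporting.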
Associated Policies
Associated Practices
  • AI Lifecycle Governance
  • Responsible AI Framework Adoption
  • AI Risk Assessment
  • AI Policy Compliance Checking
  • AI Ethics Review Board
  • Model Registry Management
  • Red-Teaming for AI
  • Model Reproducibility Standards
  • Model Card Documentation
  • Data Versioning and Lineage
  • AI Working Agreements
