Standard: AI use cases are selected based on validated business impact

Purpose and Strategic Importance

This standard requires AI use cases to demonstrate validated evidence of business impact, whether through data analysis, user research, or a structured feasibility assessment, before engineering investment is approved. It supports the policy of aligning AI investment to measurable outcomes by ensuring that resources flow toward problems where AI can make a genuine, quantifiable difference. Use cases selected on the basis of novelty, vendor enthusiasm, or executive sponsorship alone consistently underdeliver on value.

Strategic Impact

  • Concentrates scarce AI talent and infrastructure on the opportunities with the highest return on investment
  • Creates a shared language between business and engineering for discussing AI priorities in terms of outcomes
  • Reduces the frequency of AI projects that are technically successful but deliver no measurable business benefit
  • Builds a prioritised pipeline of AI opportunities that reflects real organisational needs rather than technology fashion
  • Enables portfolio-level visibility into the expected and realised value of AI investment across the organisation

Risks of Not Having This Standard

  • Engineering effort is wasted on AI use cases that address low-impact problems or problems that do not exist at scale
  • Business stakeholders lose confidence in AI delivery after repeated projects that produce demos but not value
  • The organisation builds AI capability in the wrong areas, creating misaligned technical debt
  • High-value problems remain unsolved because they were not surfaced through a structured selection process
  • Budget for AI is reduced organisation-wide after the portfolio fails to demonstrate a credible return

CMMI Maturity Model

Level 1 – Initial

  • People & Culture - Use cases are proposed informally based on individual enthusiasm or vendor pitches, with no business validation
  • Process & Governance - No selection framework; projects start when a sponsor secures budget, regardless of evidence of impact
  • Technology & Tools - No tooling to assess or compare use case value; decisions are made in ad hoc meetings
  • Measurement & Metrics - No expected value is defined at project inception; success criteria are vague or absent

Level 2 – Managed

  • People & Culture - Teams are expected to write a brief business case before starting AI work; product and business owners are involved in scoping
  • Process & Governance - A simple use case intake form captures the problem statement, affected volume, and estimated impact before work begins (a sketch of such a record follows this list)
  • Technology & Tools - A shared backlog or register of AI opportunities is maintained; items are ranked by estimated business value
  • Measurement & Metrics - Each approved use case has a stated target metric and baseline; progress against the metric is reviewed at milestones
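
A minimal sketch of what such an intake record might capture, assuming a Python-based opportunity register; the field names and ranking key below are illustrative assumptions, not a prescribed schema:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class UseCaseIntake:
        """One entry in the AI opportunity register (illustrative fields)."""
        title: str
        problem_statement: str   # the business problem, in the submitter's words
        affected_volume: int     # e.g. transactions or users affected per month
        estimated_impact: float  # expected annual value, in currency units
        target_metric: str       # the KPI the use case is expected to move
        baseline_value: float    # current value of that KPI
        sponsor: str
        submitted_on: date = field(default_factory=date.today)

    # Rank the shared register by estimated business value, highest first
    register: list[UseCaseIntake] = []  # e.g. loaded from the shared backlog
    register.sort(key=lambda uc: uc.estimated_impact, reverse=True)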

Level 3 – Defined

  • People & Culture - A cross-functional panel (AI, business, data, risk) evaluates use cases against a defined impact framework before investment is approved
  • Process & Governance - A structured evaluation framework scores each use case on feasibility, data availability, strategic alignment, and estimated value (sketched after this list)
  • Technology & Tools - A value tracking tool links approved use cases to their KPIs; realised value is tracked from pilot through to production
  • Measurement & Metrics - Use cases are evaluated against a quantified impact threshold; those below the threshold are deprioritised or reshaped
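
A minimal sketch of how such a scoring framework might be expressed, assuming criteria scored on a 0-5 scale with weights summing to one; the specific weights and threshold here are illustrative assumptions:

    # Illustrative weighted-scoring sketch; weights and threshold are assumptions.
    WEIGHTS = {
        "feasibility": 0.25,
        "data_availability": 0.25,
        "strategic_alignment": 0.20,
        "estimated_value": 0.30,
    }
    APPROVAL_THRESHOLD = 3.5  # on the same 0-5 scale as the criteria

    def score_use_case(scores: dict[str, float]) -> float:
        """Weighted average of criterion scores, each on a 0-5 scale."""
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    def evaluate(scores: dict[str, float]) -> str:
        total = score_use_case(scores)
        verdict = "approve" if total >= APPROVAL_THRESHOLD else "deprioritise or reshape"
        return f"{verdict} (score {total:.2f})"

    # Example: strong data and value, weaker strategic fit
    print(evaluate({"feasibility": 4, "data_availability": 5,
                    "strategic_alignment": 2, "estimated_value": 4}))
    # approve (score 3.85)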

Level 4 – Quantitatively Managed

  • People & Culture - Investment decisions are informed by a portfolio view of expected versus realised value; teams are accountable for delivering against stated impact estimates
  • Process & Governance - Use case selection is reviewed quarterly; cases that fail to demonstrate impact within defined timeframes are deprioritised
  • Technology & Tools - Predictive models inform use case prioritisation based on historical delivery data and feasibility signals
  • Measurement & Metrics - ROI per use case is calculated at project close; the prediction accuracy of impact estimates is tracked to improve future assessments (a sketch follows this list)
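
A minimal sketch of the project-close calculation, assuming realised value and total delivery cost are known at close; measuring estimate accuracy as absolute percentage error is one reasonable choice among several:

    def roi(realised_value: float, total_cost: float) -> float:
        """Return on investment at project close, as a ratio of net value to cost."""
        return (realised_value - total_cost) / total_cost

    def estimate_error(predicted_impact: float, realised_impact: float) -> float:
        """Absolute percentage error of the original impact estimate."""
        return abs(predicted_impact - realised_impact) / abs(realised_impact)

    # Example: 500k predicted, 400k realised, 250k total delivery cost
    print(f"ROI: {roi(400_000, 250_000):.0%}")                        # ROI: 60%
    print(f"Estimate error: {estimate_error(500_000, 400_000):.0%}")  # Estimate error: 25%

Tracking estimate_error across completed use cases is what closes the feedback loop this level describes: systematic over-estimation becomes visible and can be corrected in future assessments.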

Level 5 – Optimising

  • People & Culture - Teams contribute retrospective impact data that continuously improves the organisation's ability to predict AI value
  • Process & Governance - The use case selection framework is continuously refined based on delivery outcomes and changes in business strategy
  • Technology & Tools - AI-assisted opportunity discovery tools surface high-potential use cases from operational data and process analysis
  • Measurement & Metrics - Use case selection accuracy and portfolio-level value realisation are benchmarked against industry peers

Key Measures

  • Percentage of AI projects with a documented and validated business case before engineering starts (a computation sketch for the first three measures follows this list)
  • Average predicted versus realised business impact across completed AI use cases
  • Proportion of AI projects that achieved their stated impact target within the defined timeframe
  • Number of use cases deprioritised per quarter due to failure to meet impact threshold in the evaluation framework
  • Portfolio-level ROI from AI investment measured annually against baseline year
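
A minimal sketch of how the first three measures might be computed from a register of completed use cases; the record fields, and the choice of a realised-to-predicted ratio for the second measure, are illustrative assumptions:

    from dataclasses import dataclass

    @dataclass
    class ClosedUseCase:
        validated_before_start: bool  # business case validated before engineering began
        predicted_impact: float       # impact stated at approval
        realised_impact: float        # impact measured after delivery
        hit_target_in_time: bool      # met stated target within the defined timeframe

    def portfolio_measures(cases: list[ClosedUseCase]) -> dict[str, float]:
        """First three key measures, computed over completed use cases."""
        n = len(cases)
        return {
            "pct_validated_before_start": sum(c.validated_before_start for c in cases) / n,
            "avg_realised_vs_predicted": sum(c.realised_impact / c.predicted_impact
                                             for c in cases) / n,
            "pct_hit_target_in_time": sum(c.hit_target_in_time for c in cases) / n,
        }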

Associated Policies

Associated Practices
  • Business Impact Measurement
  • AI Use Case Discovery
  • AI Prototyping and PoC
  • Feasibility and Data Readiness Assessment
  • Value Hypothesis Testing
