Standard: AI investment decisions are informed by value realisation data
Purpose and Strategic Importance
This standard requires that decisions to continue, scale, pause, or stop AI investment must be grounded in value realisation data collected from live systems — not just projections made at project inception. It supports the policy of aligning AI investment to measurable outcomes by creating a feedback loop between deployed AI outcomes and funding decisions. Without this feedback loop, organisations continue investing in AI systems that have stopped delivering value while starving high-potential opportunities of resources.
Strategic Impact
- Directs AI budget toward initiatives with demonstrated impact and away from projects that are producing outputs but not outcomes
- Builds board and executive confidence in AI investment by grounding funding conversations in evidence rather than aspiration
- Enables a portfolio approach to AI investment where scaling decisions are driven by real-world signals
- Creates accountability for AI teams by connecting their work to the metrics that drive investment decisions
- Surfaces the true cost of AI at scale — including infrastructure, maintenance, and human oversight — enabling accurate ROI calculation
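The last point can be made concrete. Below is a minimal sketch of a full-cost ROI calculation, with purely illustrative figures, that counts infrastructure, maintenance, and human oversight against realised value:

```python
# Hypothetical full-cost ROI calculation for one deployed AI system.
# All figures are illustrative; real inputs would come from finance
# and value-tracking systems.

def full_cost_roi(realised_value, infrastructure, maintenance, oversight):
    """Return ROI as a fraction of total cost of ownership."""
    total_cost = infrastructure + maintenance + oversight
    if total_cost <= 0:
        raise ValueError("total cost must be positive")
    return (realised_value - total_cost) / total_cost

# A system that looks profitable on infrastructure cost alone can be
# loss-making once maintenance and human oversight are counted.
roi_infra_only = full_cost_roi(500_000, 300_000, 0, 0)
roi_full_cost = full_cost_roi(500_000, 300_000, 150_000, 120_000)

print(f"Infra-only ROI: {roi_infra_only:+.2f}")  # +0.67
print(f"Full-cost ROI:  {roi_full_cost:+.2f}")   # -0.12
```

The gap between the two numbers is exactly the distortion this standard is designed to surface.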
Risks of Not Having This Standard
- Organisations continue funding AI projects that are technically live but commercially irrelevant
- Investment decisions are made on the basis of vanity metrics (models deployed, predictions made) rather than business outcomes
- The AI portfolio grows in breadth without growing in value, creating a maintenance burden that crowds out new development
- Budget cycles are disconnected from delivery reality, creating boom-and-bust patterns in AI investment
- Leadership loses confidence in AI because no one can answer the question "what are we getting for this investment?"
CMMI Maturity Model
Level 1 – Initial
| Category | Description |
| --- | --- |
| People & Culture | Investment decisions are made based on vendor proposals and internal advocacy with no empirical data |
| Process & Governance | No mechanism exists to review AI investment against realised outcomes; projects are funded until budgets run out |
| Technology & Tools | No tooling links AI project delivery to business metric changes; correlation is assumed rather than measured |
| Measurement & Metrics | No post-deployment value tracking; the organisation cannot quantify the return on its AI investment |
Level 2 – Managed
| Category | Description |
| --- | --- |
| People & Culture | Teams are expected to report on business metric progress at project reviews; some light attribution analysis is attempted |
| Process & Governance | Investment reviews include a check on whether deployed AI systems are meeting their stated KPIs |
| Technology & Tools | Business dashboards include AI-attributed metrics alongside operational benchmarks |
| Measurement & Metrics | Key business metrics are tracked before and after AI deployment; simple before-after comparisons are reported |
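The before-after comparison described at this level can be as simple as comparing KPI averages across the two periods. A sketch with illustrative numbers follows; note that this naive estimate attributes the whole change to the AI system, which is why attribution modelling appears at Level 4:

```python
# Hypothetical before-after KPI comparison around an AI deployment.
# This is a naive uplift estimate: it ignores seasonality and any
# other factors affecting the metric.

def simple_uplift(before, after):
    """Relative change in a KPI between pre- and post-deployment periods."""
    baseline = sum(before) / len(before)
    current = sum(after) / len(after)
    return (current - baseline) / baseline

# Weekly conversion rates (%) before and after deployment (illustrative).
before = [2.0, 2.2, 1.8, 2.0]
after = [2.4, 2.6, 2.5, 2.5]
print(f"Estimated uplift: {simple_uplift(before, after):+.0%}")  # +25%
```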
Level 3 – Defined
| Category | Description |
| --- | --- |
| People & Culture | Value realisation review is a formal governance checkpoint; product managers are accountable for reporting on outcome delivery |
| Process & Governance | A defined value realisation framework tracks leading and lagging indicators per AI investment; reviews occur on a quarterly cadence |
| Technology & Tools | A value tracking platform links deployed AI systems to their associated business KPIs and updates them continuously |
| Measurement & Metrics | Realised value (financial and non-financial) is calculated per deployed AI system and reported to investment governance forums |
Level 4 – Quantitatively Managed
| Category | Description |
| --- | --- |
| People & Culture | Investment decisions are governed by a portfolio review board that uses realisation data as the primary input |
| Process & Governance | Threshold-based investment governance pauses or redirects funding when value realisation falls below defined levels |
| Technology & Tools | Attribution modelling is applied to distinguish AI contribution from other factors affecting business metrics |
| Measurement & Metrics | Portfolio-level ROI is calculated quarterly; investment efficiency (value per pound spent) is tracked per AI capability area |
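Threshold-based governance of the kind described at this level can be sketched as a rule comparing realised value to the business-case projection. The thresholds and field names below are illustrative, not a prescribed policy:

```python
# Hypothetical threshold-based review: classify each investment by the
# ratio of realised value to its business-case projection.
# Thresholds and names are illustrative only.

def review_portfolio(investments, pause_threshold=0.5, scale_threshold=1.0):
    """Classify each investment as 'scale', 'continue', or 'pause'."""
    decisions = {}
    for name, data in investments.items():
        ratio = data["realised_value"] / data["projected_value"]
        if ratio >= scale_threshold:
            decisions[name] = "scale"
        elif ratio >= pause_threshold:
            decisions[name] = "continue"
        else:
            decisions[name] = "pause"  # triggers a redirect/stop review
    return decisions

portfolio = {
    "churn-model": {"projected_value": 400_000, "realised_value": 520_000},
    "doc-triage":  {"projected_value": 250_000, "realised_value": 150_000},
    "forecasting": {"projected_value": 300_000, "realised_value": 90_000},
}
print(review_portfolio(portfolio))
# {'churn-model': 'scale', 'doc-triage': 'continue', 'forecasting': 'pause'}
```

In practice the thresholds themselves would be set and tuned by the portfolio review board, and a "pause" outcome would start a human review rather than an automatic funding cut.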
Level 5 – Optimising
| Category | Description |
| --- | --- |
| People & Culture | Realisation data is used proactively to shape the AI strategy, not just report on current performance |
| Process & Governance | Investment governance frameworks are continuously refined based on learnings from realisation data and external benchmarking |
| Technology & Tools | Predictive investment models use historical realisation data to forecast the likely return of proposed AI initiatives |
| Measurement & Metrics | Value realisation trends are benchmarked externally; the organisation can articulate its AI investment efficiency relative to peers |
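A predictive investment model at this level could be as simple as a line fitted to historical spend-versus-realised-value data. The data and the single-feature model below are illustrative only; a real model would use many more features and historical initiatives:

```python
# Hypothetical forecast of likely return for a proposed initiative,
# fitted to historical realisation data with ordinary least squares.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Historical initiatives: investment (£k) vs realised value (£k).
spend = [100, 200, 300, 400, 500]
realised = [90, 210, 280, 430, 480]

slope, intercept = fit_line(spend, realised)
forecast = slope * 250 + intercept  # proposed £250k initiative
print(f"Forecast realised value: £{forecast:.0f}k")  # £248k
```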
Key Measures
- Percentage of live AI systems with active value realisation tracking linked to defined business KPIs
- Proportion of AI investment decisions made in the last quarter that were explicitly informed by realisation data
- Average time from AI deployment to first value realisation measurement
- Number of AI investments paused or redirected based on value realisation data in the last review cycle
- Portfolio-level ROI from AI investment calculated against original business case projections
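Several of these measures fall out directly from a portfolio snapshot. A sketch, with hypothetical field names, computing tracking coverage and the realised-versus-projected ratio:

```python
# Hypothetical portfolio snapshot; field names are illustrative and
# would map to whatever the value tracking platform records.

systems = [
    {"name": "churn-model", "live": True, "tracked": True,
     "realised": 520_000, "projected": 400_000},
    {"name": "doc-triage", "live": True, "tracked": False,
     "realised": 0, "projected": 250_000},
    {"name": "forecasting", "live": True, "tracked": True,
     "realised": 90_000, "projected": 300_000},
]

live = [s for s in systems if s["live"]]

# Measure 1: percentage of live AI systems with active value tracking.
tracking_coverage = sum(s["tracked"] for s in live) / len(live)

# Measure 5: portfolio realised value against business-case projections.
portfolio_ratio = (sum(s["realised"] for s in live)
                   / sum(s["projected"] for s in live))

print(f"Tracking coverage: {tracking_coverage:.0%}")    # 67%
print(f"Realised vs projected: {portfolio_ratio:.2f}")  # 0.64
```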