Standard: Engineers maintain the literacy required to safely apply and integrate AI capabilities
Purpose and Strategic Importance
As AI tools, APIs, and models become embedded in software delivery, engineers must develop the literacy to apply them responsibly — not just as productivity accelerators, but as components of production systems with real risks. This standard ensures engineers understand how to evaluate AI-generated outputs, recognise the limitations and failure modes of AI models, and make informed decisions when integrating AI capabilities into features, workflows, and architectures.
This is not merely about using AI copilots more effectively. It encompasses understanding prompt design, model behaviour, data provenance, bias and fairness considerations, and the security and compliance implications of incorporating third-party AI services. Engineers who possess this literacy protect their teams and organisations from compounding hidden risk, and are equipped to realise the genuine value AI can offer when applied with judgement and care.
Strategic Impact
- Teams that understand AI capabilities and limitations ship safer, more maintainable AI-powered features with fewer unintended consequences in production.
- Engineers with strong AI literacy can critically evaluate AI-generated code rather than blindly accepting suggestions, reducing defect introduction risk.
- Responsible AI integration builds organisational trust, reduces regulatory and reputational exposure, and supports compliance with emerging AI governance frameworks.
- A literate engineering workforce accelerates adoption of AI in a controlled, auditable way, delivering competitive advantage without accumulating hidden technical or ethical debt.
Risks of Not Having This Standard
- AI-generated code is merged without review, introducing subtle bugs, security vulnerabilities, or licence-incompatible dependencies that are difficult to trace later.
- Engineers integrate third-party AI APIs without understanding data handling implications, creating privacy, compliance, or data sovereignty risks.
- Bias and fairness issues in AI-powered features go undetected until they cause user harm or trigger regulatory scrutiny.
- Teams become over-reliant on AI tooling without understanding what the tools actually do, eroding engineering judgement and the ability to debug AI-assisted systems.
- Inconsistent AI usage practices across teams create fragmented standards, duplicated effort, and uneven risk exposure across the organisation.
CMMI Maturity Model
Level 1 – Initial
| Category | Description |
| --- | --- |
| People & Culture | Engineers use AI tools opportunistically with no shared understanding of risks or responsible use. |
| Process & Governance | No guidance exists on when, how, or whether to use AI tools in engineering work. |
| Technology & Tools | AI tools are adopted individually based on personal preference with no organisational oversight. |
| Measurement & Metrics | No tracking of AI tool usage, generated code volume, or associated defect or risk indicators. |
Level 2 – Managed
| Category | Description |
| --- | --- |
| People & Culture | Awareness of AI risks is growing, and some engineers informally share guidance with peers. |
| Process & Governance | Basic expectations around AI tool use are discussed but not formally documented or enforced. |
| Technology & Tools | Teams begin to standardise on approved AI tools and establish informal review expectations. |
| Measurement & Metrics | Teams are beginning to identify defects attributed to AI-generated code, but do not yet track them systematically. |
Level 3 – Defined
| Category | Description |
| --- | --- |
| People & Culture | Engineers are expected to understand AI fundamentals including prompt design, model limitations, and bias awareness. |
| Process & Governance | Documented standards govern AI tool use, code review of AI output, and integration of AI-powered features. |
| Technology & Tools | Approved AI tooling is defined, and code review processes explicitly address AI-generated contributions. |
| Measurement & Metrics | Teams track AI literacy levels, training completion, and defects traceable to AI-assisted development. |
Level 4 – Quantitatively Managed
| Category | Description |
| --- | --- |
| People & Culture | AI literacy is assessed against defined competency frameworks and gaps are addressed through targeted learning plans. |
| Process & Governance | AI integration decisions are subject to structured risk assessment covering security, bias, data, and compliance. |
| Technology & Tools | Tooling supports automated detection of AI-generated code, flagging it for review and tracking its downstream impact (see the sketch following this table). |
| Measurement & Metrics | Quantitative measures track the ratio of AI-generated to human-reviewed code, defect rates, and compliance adherence. |
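As a concrete illustration of the automated flagging described at this level, the following is a minimal sketch in Python. It assumes, purely hypothetically, that AI-assisted commits carry an `AI-Assisted: true` trailer in their commit messages; real detection tooling and trailer conventions vary by organisation and tool.

```python
from dataclasses import dataclass

# Hypothetical trailer convention: real organisations may rely on
# tool-specific co-author lines or dedicated detectors instead.
AI_TRAILER = "AI-Assisted: true"


@dataclass
class Commit:
    sha: str
    message: str
    reviewed: bool  # whether a human review was recorded for this change


def flag_for_review(commits: list[Commit]) -> list[Commit]:
    """Return AI-assisted commits that have not yet had human review."""
    return [c for c in commits if AI_TRAILER in c.message and not c.reviewed]


if __name__ == "__main__":
    history = [
        Commit("a1b2c3d", "Add retry logic\n\nAI-Assisted: true", reviewed=False),
        Commit("e4f5a6b", "Fix typo in docs", reviewed=True),
        Commit("c7d8e9f", "Refactor parser\n\nAI-Assisted: true", reviewed=True),
    ]
    for commit in flag_for_review(history):
        print(f"{commit.sha}: AI-assisted change awaiting human review")
```

In practice a check like this would run as a required CI gate, with flagged commits joined to defect-tracking data by SHA so their downstream impact can be measured.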
Level 5 – Optimising
| Category | Description |
| --- | --- |
| People & Culture | Engineers actively contribute to AI literacy communities of practice and help shape industry standards. |
| Process & Governance | AI governance frameworks are continuously refined in response to new model capabilities and regulatory developments. |
| Technology & Tools | The organisation experiments with and evaluates emerging AI capabilities in controlled environments before adoption. |
| Measurement & Metrics | Continuous feedback loops connect AI literacy investment to measurable improvements in quality, safety, and velocity. |
Key Measures
- Percentage of engineers who have completed AI literacy training covering responsible use, prompt design, and risk awareness (see the sketch after this list)
- Rate of defects or security vulnerabilities traced to AI-generated code contributions in production
- Coverage of AI-generated code by peer review processes as a proportion of total AI-assisted contributions
- Number of AI integration incidents involving data privacy, compliance, or bias issues reported per quarter
- Proportion of teams with documented AI usage guidelines reviewed and approved by engineering leadership
- Engineer confidence scores in AI literacy self-assessments conducted as part of regular capability reviews
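To make these measures concrete, here is a minimal sketch of how the first and third measures might be computed. The record shapes and field names (`completed_training`, `ai_assisted`, `peer_reviewed`) are illustrative assumptions, not a prescribed schema.

```python
def training_completion_rate(engineers: list[dict]) -> float:
    """Measure 1: percentage of engineers with AI literacy training complete."""
    done = sum(1 for e in engineers if e["completed_training"])
    return 100.0 * done / len(engineers)


def ai_review_coverage(contributions: list[dict]) -> float:
    """Measure 3: peer-review coverage of AI-assisted contributions (%)."""
    ai_assisted = [c for c in contributions if c["ai_assisted"]]
    if not ai_assisted:
        return 100.0  # no AI-assisted work, so coverage is vacuously complete
    reviewed = sum(1 for c in ai_assisted if c["peer_reviewed"])
    return 100.0 * reviewed / len(ai_assisted)


if __name__ == "__main__":
    engineers = [
        {"name": "avery", "completed_training": True},
        {"name": "blake", "completed_training": False},
        {"name": "casey", "completed_training": True},
    ]
    contributions = [
        {"id": 101, "ai_assisted": True, "peer_reviewed": True},
        {"id": 102, "ai_assisted": True, "peer_reviewed": False},
        {"id": 103, "ai_assisted": False, "peer_reviewed": True},
    ]
    print(f"Training completion: {training_completion_rate(engineers):.0f}%")
    print(f"AI review coverage: {ai_review_coverage(contributions):.0f}%")
```

Equivalent queries over real training and code-review records would feed the dashboards reviewed as part of regular capability reviews.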