Standard: Engineers are not required to deploy AI systems they have safety concerns about
Purpose and Strategic Importance
This standard establishes that no engineer or technical practitioner can be required, pressured, or incentivised to deploy an AI system they have documented, good-faith safety concerns about. It supports the policy of making AI work sustainable for the people who build it by recognising that engineers are often the first to identify safety issues and must have a protected channel to raise those concerns without career risk. This standard is the technical and human complement to formal governance — it ensures that the people closest to the code have meaningful authority to act on what they know.
Strategic Impact
- Creates a last line of defence against deploying AI systems with known safety problems by empowering the people best positioned to detect them
- Builds engineer trust in the organisation's commitment to responsible AI by making ethical responsibility a two-way obligation
- Reduces the likelihood of high-profile AI safety failures that emerge from deployment decisions made under deadline pressure over technical objections
- Attracts and retains engineers who take professional responsibility seriously and would otherwise leave organisations that coerce deployment
- Aligns organisational practice with emerging professional standards and regulatory expectations around AI engineer accountability
Risks of Not Having This Standard
- Engineers deploy AI systems with known problems under management pressure, creating liability for the organisation and harm for users
- Safety concerns are suppressed because engineers believe raising them will damage their careers or project standing
- The organisation loses its most ethically engaged engineers, who leave rather than compromise their professional standards
- AI incidents occur that could have been prevented if the engineer who identified the risk had felt empowered to act on it
- The organisation faces regulatory and legal consequences when it emerges that safety concerns were raised and overridden without due process
CMMI Maturity Model
Level 1 – Initial
| Category | Description |
| --- | --- |
| People & Culture | Deployment decisions are driven by deadlines and management authority; engineer concerns are treated as obstacles rather than signals |
| Process & Governance | No formal mechanism for an engineer to record or escalate a safety objection; informal concerns are resolved informally |
| Technology & Tools | No tooling supports the capture or tracking of deployment safety concerns |
| Measurement & Metrics | No measurement of whether deployment safety concerns were raised, recorded, or resolved before deployment |
Level 2 – Managed
| Category | Description |
| --- | --- |
| People & Culture | Team leads acknowledge that engineers may have legitimate safety concerns and create informal space for them to be heard |
| Process & Governance | A process for raising a deployment concern is informally understood; concerns are discussed in deployment review meetings |
| Technology & Tools | Deployment records include a field for open concerns or objections; this is reviewed before deployment sign-off |
| Measurement & Metrics | The number of deployment safety concerns raised is tracked informally; patterns are discussed in retrospectives |
Level 3 – Defined
| Category | Description |
| --- | --- |
| People & Culture | The right to raise a deployment safety concern is explicitly stated in the team charter and communicated during onboarding |
| Process & Governance | A formal safety concern escalation procedure defines how concerns are recorded, reviewed by a neutral party, and resolved before deployment can proceed |
| Technology & Tools | A deployment safety concern form is integrated into the deployment pipeline; concerns block deployment until resolved or accepted with documented risk |
| Measurement & Metrics | Safety concern raise rate, resolution outcome (concern accepted vs overridden), and time to resolution are tracked per deployment |
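The Level 3 tooling expectation — concerns block deployment until each one is resolved or formally risk-accepted — can be expressed as a simple pipeline gate. The sketch below is illustrative only: the record shape, status values, and function names are assumptions, not a reference to any specific concern-tracking product.

```python
from dataclasses import dataclass

@dataclass
class SafetyConcern:
    """One recorded deployment safety concern (illustrative shape)."""
    concern_id: str
    status: str          # assumed values: "open", "resolved", "risk_accepted"
    rationale: str = ""  # required documentation when status is "risk_accepted"

def deployment_allowed(concerns: list[SafetyConcern]) -> tuple[bool, list[str]]:
    """Return (allowed, blocking_ids) for a deployment candidate.

    A concern blocks deployment if it is still open, or if it was
    risk-accepted without a documented rationale.
    """
    blocking = [
        c.concern_id
        for c in concerns
        if c.status == "open"
        or (c.status == "risk_accepted" and not c.rationale.strip())
    ]
    return (not blocking, blocking)
```

In a CI/CD pipeline, a gate like this would run as a pre-deployment step and fail the job while `deployment_allowed` returns `False`, forcing the escalation procedure to complete before release.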
Level 4 – Quantitatively Managed
| Category | Description |
| --- | --- |
| People & Culture | Engineers who raise safety concerns receive positive recognition; concern outcomes are reviewed in retrospectives to assess process fairness |
| Process & Governance | Overridden safety concerns require C-level or governance board sign-off with documented rationale; the decision is reviewable by an independent party |
| Technology & Tools | Anonymised concern reporting is available; concern trend analysis identifies systemic issues in the deployment process |
| Measurement & Metrics | Rate of safety concern overrides, post-deployment incident correlation with prior concerns, and engineer satisfaction with the resolution process are measured |
Level 5 – Optimising
| Category | Description |
| --- | --- |
| People & Culture | The organisation benchmarks its concern culture against responsible AI leaders; engineers recommend the organisation as an ethical AI employer |
| Process & Governance | The concern resolution process is continuously improved based on engineer feedback and retrospective analysis of concern outcomes versus deployment incidents |
| Technology & Tools | AI-assisted concern triage helps leadership prioritise and respond to concerns efficiently without bureaucratic delay |
| Measurement & Metrics | Long-term correlation between safety concern culture metrics and AI incident rate is tracked to demonstrate the ROI of a strong safety voice culture |
Key Measures
- Number of deployment safety concerns raised per quarter through formal channels
- Percentage of safety concerns resolved before deployment (versus overridden with documented risk acceptance)
- Post-deployment incident rate for systems where safety concerns were raised and overridden (versus those with no concerns)
- Engineer satisfaction score with the safety concern resolution process measured in team surveys
- Time from safety concern submission to formal resolution per concern
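The measures above can be derived from per-concern records captured by the escalation procedure. A minimal sketch, assuming each concern record carries a raised timestamp, a resolution timestamp, and an outcome field (all field names here are illustrative):

```python
from datetime import datetime

# Illustrative per-concern records for one quarter.
concerns = [
    {"raised": datetime(2024, 4, 1),  "resolved": datetime(2024, 4, 3),  "outcome": "resolved"},
    {"raised": datetime(2024, 4, 10), "resolved": datetime(2024, 4, 15), "outcome": "overridden"},
]

# Number of concerns raised through formal channels this quarter.
total_raised = len(concerns)

# Percentage resolved before deployment (vs overridden with risk acceptance).
pct_resolved = 100 * sum(c["outcome"] == "resolved" for c in concerns) / total_raised

# Mean time from submission to formal resolution, in days.
mean_days_to_resolution = sum(
    (c["resolved"] - c["raised"]).days for c in concerns
) / total_raised
```

Comparing the post-deployment incident rate of overridden-concern systems against systems with no concerns would additionally require joining these records to an incident log, which is deliberately out of scope for this sketch.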