Practice: AI Risk Assessment
Purpose and Strategic Importance
AI systems can cause harm in ways that are not immediately obvious — through biased outputs, unintended misuse, failure under edge conditions, or erosion of user trust. Without a structured approach to identifying and evaluating these risks before a system is built or deployed, teams make implicit trade-offs that are never examined, documented, or revisited. AI Risk Assessment creates the space for teams to surface, name, and respond to risks deliberately rather than reactively.
Risk assessment also serves a governance function: it creates an auditable record of what risks were considered, what mitigations were applied, and who was accountable for those decisions. This is increasingly required by regulation — from the EU AI Act to sector-specific guidance in financial services and healthcare — and is a foundation for building public and organisational trust in AI systems over time.
Description of the Practice
- Conducts structured evaluation of potential harms, biases, failure modes, and misuse vectors before AI development begins and again before deployment.
- Classifies AI use cases by risk tier (e.g., low, medium, high, unacceptable) based on severity, reversibility, and breadth of potential impact (a scoring sketch follows this list).
- Engages stakeholders from legal, ethics, product, and domain functions to ensure risk identification is multi-perspective and not purely technical.
- Documents risk assessments as living artefacts that are reviewed and updated at key lifecycle milestones, not treated as one-time compliance exercises.
- Defines risk-proportionate controls — ranging from monitoring and human oversight to outright rejection of use cases that exceed the team's risk appetite.
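One way to make the tiering rubric concrete is as a small scoring function. The sketch below is illustrative only: the 1–3 scales, the cut-off values, and the rule that severe irreversible harm is rejected outright are all assumptions standing in for whatever rubric your organisation adopts, not a standard.

```python
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


def classify_use_case(severity: int, reversibility: int, breadth: int) -> RiskTier:
    """Map 1-3 scores (3 = worst) to a risk tier.

    severity:      how serious is the harm if it occurs?
    reversibility: how hard is the harm to undo? (3 = irreversible)
    breadth:       how many people could be affected?

    The scales and cut-offs are placeholders; calibrate them
    against your own risk appetite.
    """
    for score in (severity, reversibility, breadth):
        if not 1 <= score <= 3:
            raise ValueError("scores must be between 1 and 3")

    total = severity + reversibility + breadth  # ranges from 3 to 9

    # Example policy: severe, irreversible harm is rejected outright,
    # regardless of how many people are affected.
    if severity == 3 and reversibility == 3:
        return RiskTier.UNACCEPTABLE
    if total >= 7:
        return RiskTier.HIGH
    if total >= 5:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: a chatbot giving medical guidance to the general public.
print(classify_use_case(severity=3, reversibility=2, breadth=3))  # RiskTier.HIGH
```

Whatever the exact thresholds, encoding the rubric once keeps tier assignments consistent across teams and makes the criteria themselves reviewable artefacts.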
How to Practise It (Playbook)
1. Getting Started
- Adopt or adapt a risk taxonomy that covers harm categories relevant to your domain (e.g., fairness, privacy, safety, security, operational reliability).
- Create a lightweight risk assessment template that teams complete at the outset of any new AI initiative, before significant investment is made (a template sketch follows this list).
- Assign a risk owner for every AI use case — someone accountable for ensuring the assessment is completed, reviewed, and kept current.
- Start with your highest-stakes existing AI system and conduct a retrospective risk assessment to establish a benchmark and build team familiarity with the process.
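A minimal sketch of what the lightweight template might look like as a structured record, reusing the tiering inputs from the earlier sketch. The field names are assumptions chosen to illustrate the taxonomy, owner, scores, and mitigations described above, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskAssessment:
    """Lightweight intake record for a new AI use case.

    Field names are illustrative; adapt them to your own taxonomy.
    """
    use_case: str
    risk_owner: str                # an accountable individual, not a team
    harm_categories: list[str]     # e.g. fairness, privacy, safety, security
    severity: int                  # 1-3, per the tiering rubric
    reversibility: int             # 1-3
    breadth: int                   # 1-3
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)
    notes: str = ""


# Hypothetical example entry for a hiring use case.
assessment = RiskAssessment(
    use_case="CV screening assistant",
    risk_owner="jane.doe",
    harm_categories=["fairness", "privacy"],
    severity=3,
    reversibility=2,
    breadth=2,
    mitigations=["human review of all rejections", "quarterly bias audit"],
)
```

Keeping the record this small lowers the cost of completing it at intake; the `last_reviewed` field is what later lets the register flag assessments that have gone stale.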
2. Scaling and Maturing
- Integrate risk assessment into your AI intake process so no new use case progresses to development without a completed and reviewed assessment.
- Build a risk register that consolidates assessments across all AI systems, enabling portfolio-level visibility of accumulated risk exposure.
- Establish escalation criteria that automatically trigger review by senior stakeholders or an ethics board when risk scores exceed defined thresholds (a combined register-and-escalation sketch follows this list).
- Periodically review the risk taxonomy itself to ensure it reflects evolving regulatory requirements, incident learnings, and changes in the AI landscape.
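The sketch below shows how a register with automatic escalation could hang together, building on the hypothetical `RiskTier`, `RiskAssessment`, and `classify_use_case` definitions from the earlier sketches. The escalation rule (anything tiered HIGH or above goes to senior review) and the 90-day review cadence are example policies, not recommendations.

```python
class RiskRegister:
    """Portfolio-level view of assessments across all AI systems."""

    # Example policy: tiers at or above HIGH trigger senior review.
    ESCALATION_TIERS = {RiskTier.HIGH, RiskTier.UNACCEPTABLE}

    def __init__(self) -> None:
        self._entries: dict[str, RiskAssessment] = {}

    def register(self, assessment: RiskAssessment) -> RiskTier:
        """File an assessment and return its tier, escalating if needed."""
        tier = classify_use_case(
            assessment.severity, assessment.reversibility, assessment.breadth
        )
        self._entries[assessment.use_case] = assessment
        if tier in self.ESCALATION_TIERS:
            self._escalate(assessment, tier)
        return tier

    def stale_entries(self, today: date, max_age_days: int = 90) -> list[str]:
        """Surface assessments overdue for review (example cadence: 90 days)."""
        return [
            name
            for name, entry in self._entries.items()
            if (today - entry.last_reviewed).days > max_age_days
        ]

    def _escalate(self, assessment: RiskAssessment, tier: RiskTier) -> None:
        # Placeholder: route to your ethics board or senior review process.
        print(f"ESCALATE: {assessment.use_case} is {tier.value}; "
              f"owner {assessment.risk_owner} must obtain senior sign-off")


register = RiskRegister()
register.register(assessment)  # escalates: the CV screening example scores HIGH
```

Consolidating entries in one place is what makes the portfolio-level exposure visible, and the staleness check turns the "living artefact" principle into something a governance function can query rather than chase.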
3. Team Behaviours to Encourage
- Treat risk assessment as a collaborative act, not a checkbox completed by a single analyst or a compliance team in isolation.
- Encourage engineers to raise risk concerns early and without fear of being seen as blockers — the goal is informed decision-making, not veto.
- Normalise updating risk assessments when material changes occur — new training data, a change in user population, or a shift in deployment context.
- Celebrate examples where risk assessment prevented harm or prompted a better design decision, reinforcing its value to the team.
4. Watch Out For…
- Risk assessments becoming a rubber-stamp exercise completed after decisions have already been made rather than informing them.
- Over-reliance on technical risk categories (model accuracy, system failure) while neglecting social and ethical harms.
- Assessments that are too granular and burdensome for low-risk systems, creating a culture of avoidance rather than engagement.
- Treating risk assessment as a one-time gate rather than an ongoing governance obligation tied to each material change in the system.
5. Signals of Success
- Teams complete risk assessments before development begins as a matter of course, without prompting from governance functions.
- Risk assessments have led to concrete design changes, deployment constraints, or rejection of use cases — demonstrating real influence.
- Stakeholders outside engineering (legal, product, operations) actively participate in and contribute to risk assessment reviews.
- The risk register is maintained and current, with clear ownership and a visible history of review dates and decisions.
- Teams can articulate the risk profile of their AI systems confidently when asked by leadership, auditors, or external partners.