Policy: Govern AI Models Throughout Their Lifecycle, Not Just at Launch
Commitment to Lifecycle Governance of AI Models
Governance applied only at AI model launch is not governance — it is a deployment gate. The conditions under which a model was approved may change substantially after launch: the business context evolves, the data distribution drifts, regulatory requirements change, new failure modes are discovered, and the model's performance profile shifts with retraining. Each of these changes can invalidate the assumptions on which the original approval was based. Our commitment is to govern AI models continuously throughout their lifecycle — from initial development through deployment, ongoing operation, and eventual retirement — treating each model as a governed asset rather than a one-time compliance hurdle.
What This Means
Lifecycle governance means maintaining active oversight of deployed AI models on an ongoing basis. It means tracking model versions, detecting performance drift, conducting regular fitness-for-purpose reviews, and making explicit decisions about when models need retraining, replacement, or retirement. It means ensuring that the governance record for each model is kept current — not frozen at the point of initial approval while the model itself continues to change.
Our commitment to governing AI models throughout their lifecycle is built on:
- Model Registry and Version Control – Every production AI model is registered in a central model registry with complete version history: training data version, evaluation results, deployment dates, configuration, and change log. The registry is the authoritative record of what is deployed and under what conditions it was approved.
- Scheduled Fitness-for-Purpose Reviews – Every production model undergoes periodic fitness-for-purpose review at defined intervals — at minimum annually, more frequently for higher-risk or rapidly changing domains. Reviews assess whether the model's performance, risk profile, and operating conditions remain within the bounds of its original approval.
- Drift Detection and Response – Production models are monitored for data drift and concept drift. When drift is detected above defined thresholds, it triggers a governance review — not just a technical response. Governance review determines whether the model remains approved for its current use case given changed conditions.
- Material Change Re-Approval – When a model undergoes material change — significant retraining, change to feature inputs, change to operating scope, or change to downstream decision processes — that change triggers re-approval through the governance process appropriate to the model's risk tier. Incremental improvements that do not change the material risk profile may follow a lighter-touch review.
- Regulatory Change Monitoring – We actively monitor for regulatory changes that may affect the approval status of deployed models. When new regulations, guidance, or enforcement actions are relevant to a deployed model, they trigger a governance review to assess whether the model remains compliant.
- Retirement Criteria and Process – Every model has defined retirement criteria: conditions under which the model will be taken out of production and replaced or decommissioned. Retirement decisions are made proactively through governance review, not reactively after a significant failure. Retired models are decommissioned cleanly, with audit records preserved.
- Governance Accountability – Every production model has a named governance owner responsible for ensuring that lifecycle governance obligations are met. Governance ownership is tracked and succession-planned. Models without active governance owners are treated as governance risks requiring immediate remediation.
Why This Matters
AI models that were governed at deployment but not subsequently are among the highest-risk AI assets in an organisation. They may be operating on stale assumptions, producing outputs that would fail a current evaluation, or remaining in production long past the point at which they should have been retired. Regulatory frameworks increasingly recognise this risk and are mandating ongoing governance of deployed AI systems. Organisations that treat launch governance as sufficient will face increasing difficulty demonstrating regulatory compliance as the AI governance landscape matures.
Our Expectation
Every production AI model has an active governance record, a named governance owner, a scheduled next review date, and drift monitoring in place. Models that are not actively governed are not approved for continued operation. Governing AI models throughout their lifecycle — not just at launch — is how we ensure our AI systems remain safe, compliant, and fit for purpose throughout the full period of their deployment.
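The expectation above amounts to a simple gate: a model stays approved for continued operation only while all four conditions hold. A minimal sketch, assuming a hypothetical record layout (the field names are illustrative, not a mandated format):

```python
def approved_for_continued_operation(record: dict) -> bool:
    """A production model stays approved only while it is actively governed."""
    return (
        record.get("governance_record_active") is True   # active governance record
        and record.get("governance_owner") is not None   # named governance owner
        and record.get("next_review_date") is not None   # scheduled next review
        and record.get("drift_monitoring") is True       # drift monitoring in place
    )

# Any missing element — for example, no named owner — fails the gate.
example = {
    "governance_record_active": True,
    "governance_owner": None,
    "next_review_date": "2026-03-01",
    "drift_monitoring": True,
}
approved_for_continued_operation(example)  # → False
```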