
Practice: AI Working Agreements

Purpose and Strategic Importance

AI work raises questions that do not arise in conventional software development: When should a model be retrained? Who has the final say on a fairness trade-off? What happens when an engineer has a safety concern about a model they are being asked to deploy? How does the team handle the non-determinism and uncertainty that are inherent in machine learning? Without explicit working agreements that address these questions, teams answer them inconsistently — or do not answer them at all, leaving engineers in ambiguous situations that erode trust and psychological safety.

Working agreements are also a mechanism for embedding responsible AI principles into day-to-day practice rather than leaving them as aspirational framework statements. When the team has agreed that "any engineer can raise a safety concern about a deployment and it will be treated as a quality issue, not an obstructive opinion", this is not merely a policy — it is a behavioural contract that shapes how people actually work.


Description of the Practice

  • Develops explicit agreements about how the team approaches AI-specific decisions: model deployment approval, safety concern escalation, fairness trade-off resolution, and experiment design standards.
  • Defines the team's shared norms for code and model review — what standards apply, who reviews what, and how disagreements are resolved — reflecting the specific rigour required for AI artefacts.
  • Establishes agreements about how the team handles uncertainty — in model outputs, in experiment results, and in ethical assessments — creating shared language and processes for navigating ambiguity.
  • Reviews and refreshes working agreements regularly — at retrospectives and at the start of new projects — ensuring they remain current and genuinely reflect the team's practices.
  • Makes working agreements visible and accessible, not buried in a document nobody reads — posting them in the team's communication channels, wikis, and physical spaces where they prompt regular reference.
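One lightweight way to keep agreements visible, versioned, and referenceable is to store them as a small structured file in the team's repository and render them wherever the team works. A minimal sketch in Python — the `Agreement` structure, the field names, and the example agreements are illustrative assumptions, not part of the practice itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agreement:
    """One explicit, actionable team agreement."""
    id: str          # short handle used in day-to-day discussion
    statement: str   # the behavioural contract itself
    owner: str       # role accountable for upholding it

# Illustrative starter set -- a real team would co-create its own.
AGREEMENTS = [
    Agreement("deploy-approval",
              "No model ships to production without a model card review.",
              "tech lead"),
    Agreement("safety-pause",
              "Any engineer can pause a deployment for a safety review; "
              "the review completes within one sprint.",
              "whole team"),
    Agreement("fairness-tradeoff",
              "Fairness trade-offs are decided in a recorded team "
              "discussion, not by one individual.",
              "product owner"),
]

def render(agreements):
    """Render the agreements as a short, postable checklist."""
    return "\n".join(f"[{a.id}] {a.statement} (owner: {a.owner})"
                     for a in agreements)

print(render(AGREEMENTS))
```

Keeping the file under version control also gives the team a free audit trail of how its agreements have evolved.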

How to Practise It (Playbook)

1. Getting Started

  • Facilitate a working agreements workshop with the full team, using prompts specific to AI work: "what should happen if an engineer has a safety concern about a deployment?", "who needs to approve a model going to production?", "how do we handle experiments that produce surprising results?".
  • Draft agreements collaboratively rather than having a manager or lead draft them top-down — agreements that the team has co-created carry more legitimacy and are more likely to be followed.
  • Start with a small set of the most important agreements — five to seven — rather than trying to codify every aspect of team working, which produces a document too long to remember or reference.
  • Make it explicit that the first version is provisional and will be updated based on experience — working agreements should evolve, and the first iteration is a starting point, not a finalised constitution.

2. Scaling and Maturing

  • Extend working agreements to cover the interfaces between the AI team and other teams — how data requests are made and fulfilled, how model deployments are communicated to operations, how ethics review requests are submitted and processed.
  • Build working agreements into onboarding for new team members, ensuring that people joining the team understand the team's norms and their rationale from day one.
  • Create AI-specific variants of working agreements for different contexts — development projects, production operations, experimental research — recognising that appropriate norms differ across these contexts.
  • Track adherence to working agreements as part of retrospectives, identifying where agreements are working well, where they are being ignored, and where they need to change.
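Agreements about deployment can also be enforced mechanically rather than tracked only at retrospectives — for example, as a gate in the release pipeline that refuses to proceed until the agreed reviews are done. A hypothetical sketch, where the check names and the shape of the `release` record are invented for illustration:

```python
# Hypothetical pre-deployment gate: refuse to deploy a model unless the
# checks required by the team's working agreements have been completed.
REQUIRED_CHECKS = {"model_card_review", "safety_sign_off", "fairness_assessment"}

def deployment_gate(release: dict) -> tuple[bool, set]:
    """Return (ok, missing_checks) for a candidate release record."""
    completed = {name for name, done in release.get("checks", {}).items() if done}
    missing = REQUIRED_CHECKS - completed
    return (not missing, missing)

# A release with an incomplete fairness assessment is blocked.
ok, missing = deployment_gate({
    "model": "churn-model-v7",
    "checks": {"model_card_review": True,
               "safety_sign_off": True,
               "fairness_assessment": False},
})
print(ok, sorted(missing))  # -> False ['fairness_assessment']
```

A gate like this turns "where agreements are being ignored" from a retrospective discussion point into data: the pipeline log shows exactly which checks were missing and how often.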

3. Team Behaviours to Encourage

  • Reference working agreements actively in day-to-day discussions — "our working agreement is that we don't deploy without a model card review" should be a normal way of resolving questions about process, not a bureaucratic appeal to authority.
  • Speak up when working agreements are not being followed — the appropriate response is a constructive conversation about whether the agreement needs to change or whether the specific situation warrants an exception, not silent non-compliance.
  • Review working agreements when the team's context changes significantly — a new project, a new stakeholder environment, a change in team membership — to ensure they remain appropriate.
  • Treat working agreements as a team asset that protects every member, not a set of constraints imposed on individuals — an agreement that lets an engineer halt an unsafe deployment is a protection, not a limitation.

4. Watch Out For…

  • Working agreements that are created in a workshop and then never referenced again, becoming a historical artefact rather than a living guide to team behaviour.
  • Agreements that are too vague to be actionable — "we will always consider ethics in our AI work" is not an agreement; it is a sentiment. "Any engineer can pause a deployment for an ethics review, and the review will be completed within one sprint" is an agreement.
  • Teams that create working agreements for good-faith team members but do not build in protections for cases where agreements are violated under pressure, leaving engineers exposed when they invoke their rights under the agreement.
  • Working agreements that grow so large and comprehensive that they become unreadable and unenforceable — a short, clear set of agreements that the team can recall from memory is more valuable than an exhaustive document.

5. Signals of Success

  • Working agreements are referenced regularly in day-to-day team discussions, indicating that they are genuinely shaping team behaviour rather than existing as a formality.
  • Engineers report confidence that they can invoke working agreement protections — particularly around safety concerns — without fear of professional consequences.
  • Working agreements have been updated based on team experience, showing that the team treats them as living documents rather than fixed rules.
  • New team members can learn the team's working agreements during onboarding and understand not just what they are but why they exist and how they apply in practice.
  • The team's working agreements are visible and accessible, and team members can articulate the most important agreements without needing to look them up.

Associated Standards
  • AI teams operate with clear ownership and psychological safety
  • Engineers are not required to deploy AI systems they have safety concerns about
  • AI governance frameworks are documented and followed across the lifecycle
