
Practice: AI Retrospectives

Purpose and Strategic Importance

AI work has specific challenges that generic retrospectives do not address well: the non-determinism of experiments, the difficulty of attributing outcomes to specific decisions, the long feedback loops between changes and observable results, and the ethical dimensions of AI system behaviour. AI-specific retrospectives create the space for teams to reflect on these challenges deliberately, surface learnings from experiments and incidents that cut across standard delivery retrospectives, and build the kind of psychological safety that encourages engineers to raise concerns about AI systems openly.

Retrospectives are also the mechanism through which AI teams build collective intelligence. Individual experiments and incidents produce learning for the people who were directly involved; retrospectives turn that individual learning into shared knowledge that improves the team's practice as a whole. Teams that retrospect seriously on their AI work compound their learning over time; teams that do not retrospect repeat the same mistakes across different projects and individuals.


Description of the Practice

  • Conducts retrospectives that include AI-specific dimensions alongside standard delivery topics: experiment quality and learning, model performance outcomes, safety and fairness observations, and technical practices.
  • Creates space to review both experiments that produced useful results and experiments that did not, treating failures as equally valuable learning opportunities and celebrating the quality of learning, not just the quality of outcomes.
  • Includes review of any safety, fairness, or ethical observations from the sprint or development period — normalising the discussion of AI-specific concerns alongside delivery and quality topics.
  • Produces concrete actions from retrospectives that are owned, tracked, and followed up in subsequent sessions, ensuring that reflection translates into improvement rather than just conversation.
  • Alternates between sprint-level retrospectives (frequent, tactical) and longer retrospectives at the end of major AI initiatives (comprehensive, strategic) to capture both short-cycle and long-cycle learning.

How to Practise It (Playbook)

1. Getting Started

  • Add AI-specific topics to your standard retrospective agenda: "what did we learn from experiments?", "how did our model perform in production?", "were there any safety or fairness concerns we should discuss?".
  • Explicitly celebrate experiments that produced clear learning, even when the result was negative — this is the fastest way to build a culture where honest failure is valued over manufactured success.
  • Ensure that retrospective actions are specific and owned rather than vague aspirations — "investigate why our model performance degraded during the sprint" is an action; "improve model quality" is not.
  • Build a lightweight retrospective log that records actions and their outcomes, enabling the team to see their own improvement trajectory over time.
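A lightweight retrospective log can be as simple as a structured list of owned actions that the next session reviews. The sketch below is illustrative, assuming an in-memory structure; the class, field, and owner names are hypothetical, and a real team might keep the same shape in a wiki table or tracker instead.

```python
from dataclasses import dataclass, field

@dataclass
class RetroAction:
    """One retrospective action: specific enough to be completed and verified."""
    description: str  # e.g. "investigate why model performance degraded this sprint"
    owner: str        # a named owner, not "the team"
    sprint: str
    done: bool = False
    outcome: str = ""  # what actually changed once the action was closed

@dataclass
class RetroLog:
    """Append-only log of actions so each retrospective reviews the last one's follow-through."""
    actions: list[RetroAction] = field(default_factory=list)

    def add(self, description: str, owner: str, sprint: str) -> RetroAction:
        action = RetroAction(description, owner, sprint)
        self.actions.append(action)
        return action

    def open_actions(self) -> list[RetroAction]:
        """Actions to revisit at the start of the next retrospective."""
        return [a for a in self.actions if not a.done]
```

Opening each retrospective with `open_actions()` makes follow-through visible, which is the point: the log exists to show whether reflection translated into change, not to archive discussion.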

2. Scaling and Maturing

  • Develop AI-specific retrospective formats that go beyond standard formats — experiment learning reviews, model performance post-mortems, and practice audits — giving teams richer structures for specific types of reflection.
  • Conduct periodic longer-form retrospectives at the end of major AI projects that review the full development arc: was the value hypothesis validated, what would we do differently, what capabilities have we built?
  • Share retrospective learnings across teams through communities of practice, internal conferences, and documented case studies — building organisational learning from individual team reflection.
  • Use retrospective data to identify systemic patterns — recurring themes across multiple teams or sprints — that inform investment in tooling, process improvement, or capability building.
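Identifying systemic patterns can start with a simple tally: tag each retrospective with the themes it raised, then look for themes raised by more than one team. This is a minimal sketch under that assumption; the team names, sprint labels, and theme tags are invented for illustration.

```python
from collections import Counter

# Illustrative retrospective records: each retro is tagged with the themes it raised.
retros = [
    {"team": "search",  "sprint": "2024-S1", "themes": ["flaky-eval-pipeline", "unclear-experiment-goals"]},
    {"team": "search",  "sprint": "2024-S2", "themes": ["flaky-eval-pipeline", "slow-feedback-loop"]},
    {"team": "ranking", "sprint": "2024-S1", "themes": ["flaky-eval-pipeline", "data-drift-unnoticed"]},
    {"team": "ranking", "sprint": "2024-S2", "themes": ["slow-feedback-loop"]},
]

# Count how many distinct teams raised each theme (deduplicating repeat
# mentions by the same team across sprints).
team_theme_pairs = {(r["team"], t) for r in retros for t in r["themes"]}
teams_per_theme = Counter(theme for _, theme in team_theme_pairs)

# Themes raised by more than one team are candidates for systemic investment
# (tooling, process, capability building) rather than local team fixes.
systemic = sorted(t for t, n in teams_per_theme.items() if n > 1)
```

Even this crude cross-team tally shifts the conversation from "our retro keeps raising this" to "three teams keep raising this", which is the evidence an investment decision needs.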

3. Team Behaviours to Encourage

  • Create genuine psychological safety in retrospectives by separating reflection from performance evaluation — retrospectives are for learning, not for assessing individual contribution.
  • Encourage engineers to raise concerns about AI system safety or fairness in retrospectives, treating these as high-value observations rather than awkward topics to be deferred to specialist functions.
  • Be honest about uncertainty in retrospectives — "we don't know why performance changed" is a valid and valuable retrospective contribution that prompts investigation, not a failure to be minimised.
  • Follow through on retrospective actions consistently — the quality of a team's retrospectives is measured not by the quality of the discussion but by the quality of the improvements it produces.

4. Watch Out For…

  • Retrospectives that focus exclusively on delivery (velocity, stories completed) while treating AI-specific learning — experiments, model behaviour, safety observations — as out of scope.
  • Blameful retrospectives that attribute problems to individuals rather than to systems and processes, creating an environment where engineers avoid honest reflection to protect themselves.
  • Retrospective action items that are created but never completed, building a culture where retrospectives are performative rather than generative of real change.
  • Long gaps between retrospectives that allow problems and opportunities to accumulate beyond the point where they can be effectively reflected upon and acted on.

5. Signals of Success

  • AI-specific topics — experiment outcomes, model performance, safety observations — are regular components of retrospectives, not exceptional additions when problems arise.
  • Retrospective actions are completed at a high rate in subsequent sprints, demonstrating that reflection translates into practice improvement.
  • Engineers report feeling safe raising AI safety and fairness concerns in retrospectives, with concerns being taken seriously and acted upon rather than dismissed.
  • The team can point to specific practice improvements that originated from retrospective learnings — tooling changes, process improvements, convention changes — demonstrating the practice's value.
  • Retrospective learnings are shared beyond the immediate team, contributing to organisational knowledge and preventing other teams from repeating the same mistakes.
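The action-completion signal above is easy to measure if actions are logged per sprint. A minimal sketch, assuming actions are recorded as (sprint, completed) pairs; the sprint labels and data are hypothetical.

```python
def completion_rate_by_sprint(actions):
    """Fraction of retrospective actions completed, per sprint."""
    totals, done = {}, {}
    for sprint, completed in actions:
        totals[sprint] = totals.get(sprint, 0) + 1
        done[sprint] = done.get(sprint, 0) + (1 if completed else 0)
    return {sprint: done[sprint] / totals[sprint] for sprint in totals}

# Illustrative action records pulled from a retrospective log.
actions = [
    ("S1", False), ("S1", False), ("S1", True),
    ("S2", True),  ("S2", False), ("S2", True),
    ("S3", True),  ("S3", True),
]

rates = completion_rate_by_sprint(actions)
```

A rising completion rate over successive sprints is evidence that retrospectives are generative rather than performative; a persistently low one is itself a theme worth raising in the next session.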

Associated Standards
  • AI teams operate with clear ownership and psychological safety
  • AI investment decisions are informed by value realisation data
  • AI work is recognised and celebrated as a team achievement
