Standard: Number of Learning Experiments per Quarter

Description

Number of Learning Experiments per Quarter tracks how frequently a team or department runs deliberate experiments designed to test a hypothesis, reduce uncertainty, or improve flow, performance, or outcomes.

These experiments can take the form of technical spikes, proofs of concept, A/B tests, infrastructure pilots, or process changes, as long as they are timeboxed, measurable, and intended to inform a decision or drive an improvement.

This measure reflects the maturity of a continuous improvement culture and the commitment to innovation through evidence.

How to Use

What to Measure

  • Count the number of discrete experiments run in a quarter by each team or domain.
  • Experiments should have a clearly stated hypothesis, scope, and method of evaluation (see the record sketch after this list).
  • Types may include: architecture alternatives, tooling trials, delivery workflow changes, or new feature behaviours.
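
A minimal sketch of what a countable experiment record might hold. The `ExperimentRecord` structure and its field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record for one learning experiment; field names are
# illustrative, not a standard.
@dataclass
class ExperimentRecord:
    team: str                # owning team, platform, or value stream
    experiment_type: str     # e.g. "technical", "product", "process"
    hypothesis: str          # what we believe and why
    scope: str               # what is (and is not) being tested
    evaluation_method: str   # how success or failure will be judged
    start: date              # timebox start
    end: date                # timebox end
    learnings: str = ""      # recorded after the experiment's retrospective

    def is_countable(self) -> bool:
        """Only timeboxed, hypothesis-driven activities count towards the measure."""
        return bool(self.hypothesis and self.evaluation_method) and self.end >= self.start
```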

Formula

Number of Experiments = Count of Timeboxed, Hypothesis-Driven Activities Logged in a Quarter

Segment by:

  • Team, platform, product, or value stream
  • Experiment type (technical, product, process)
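
As a sketch of how the formula and segmentation might be computed from a simple registry (the registry rows and values below are made-up examples, not real data):

```python
from collections import Counter

# Hypothetical registry rows: (team, experiment_type, quarter).
registry = [
    ("payments", "technical", "2024-Q3"),
    ("payments", "process",   "2024-Q3"),
    ("search",   "product",   "2024-Q3"),
    ("search",   "technical", "2024-Q4"),
]

# Number of Experiments = count of logged activities in a quarter,
# segmented by team and by experiment type.
per_team = Counter((quarter, team) for team, _, quarter in registry)
per_type = Counter((quarter, etype) for _, etype, quarter in registry)

print(per_team[("2024-Q3", "payments")])   # -> 2
print(per_type[("2024-Q3", "technical")])  # -> 1
```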

Instrumentation Tips

  • Use experiment templates or lightweight tracking (e.g. Confluence pages, GitHub Discussions, Notion).
  • Review retrospectives, standups, and demos for new experiment activity.
  • Maintain a shared backlog or registry of improvement experiments with learnings recorded.
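
One lightweight way to keep that registry honest is a periodic check for entries with no recorded learnings. A sketch, assuming experiments are exported to a CSV with hypothetical "title" and "learnings" columns:

```python
import csv

# Assumed export: one row per experiment, with a "learnings" column
# filled in after the experiment's retrospective.
with open("experiment_registry.csv", newline="") as f:
    missing = [row["title"] for row in csv.DictReader(f)
               if not row["learnings"].strip()]

if missing:
    print("Experiments with no recorded learnings:", ", ".join(missing))
```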

Why It Matters

  • Promotes innovation: Encourages safe-to-fail exploration and forward-thinking solutions.
  • Supports agility: Helps teams test assumptions before committing to full-scale implementation.
  • Drives ownership: Empowers engineers to initiate improvements and refine delivery.
  • Reduces risk: Avoids wasteful investments by validating ideas early and incrementally.

Best Practices

  • Use a consistent lightweight format: hypothesis, approach, expected outcome, and how you’ll know (the evaluation criteria).
  • Pair with a ‘Retrospective of the Experiment’ to assess learnings and value gained.
  • Celebrate experiments that produced learnings, even if they didn’t succeed.
  • Allocate protected time (e.g. 10–20 percent of each sprint) for experimentation.

Common Pitfalls

  • Counting all investigations or side tasks as “experiments” without clear structure or intent.
  • No follow-through on learnings or decision-making based on experiment results.
  • Teams deprioritising improvement work in favour of feature delivery alone.
  • Lack of visibility or recognition for learning work.

Signals of Success

  • Teams regularly run and share the outcomes of small, measured experiments.
  • Experimentation is a normalised part of delivery, not an exception.
  • Engineering improvements are driven by evidence and learning, not gut feel alone.
  • Outcomes (positive or negative) from experiments are informing broader practice changes.

Related Measures

  • [[Experiment-to-Adoption Ratio]]
  • [[CoE/Agile/Measures/Adaptability/Retrospective Action Completion Rate]]
  • [[Uplift from Experiments]]
  • [[Engineering Learning Hours per Person]]
  • [[Number of Spikes Completed vs Planned]]
