
Standard: Experiment Velocity (Try–Learn–Improve Cycle Rate)

Description

Experiment Velocity measures the number of structured improvement experiments a team runs within a given timeframe. It reflects how often a team intentionally tests new ways of working, learns from the outcome, and adapts accordingly.

High experiment velocity indicates a curious, learning-oriented culture. Rather than sticking with habits or top-down mandates, the team tries new ideas, evaluates them, and integrates what works.

How to Use

What to Measure

  • Count the number of explicitly framed experiments initiated and completed per sprint, month, or quarter.
  • An experiment typically includes:
    • A hypothesis: "We believe that doing X will improve Y."
    • A defined timeframe or sprint
    • A method to observe or measure the outcome
    • A review of what was learned and the decision to adopt, adapt or discard
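The elements listed above can be sketched as a minimal record type. This is an illustrative Python sketch, not a prescribed tool; the field and method names are assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One structured improvement experiment (illustrative field names)."""
    hypothesis: str                  # "We believe that doing X will improve Y."
    timebox: str                     # defined timeframe, e.g. "Sprint 14" or "2 weeks"
    measure: str                     # how the outcome will be observed or measured
    decision: Optional[str] = None   # "adopt", "adapt", or "discard", set at review
    learnings: str = ""              # what the review taught the team

    def is_complete(self) -> bool:
        # An experiment counts as completed once it has been reviewed
        # and a decision recorded.
        return self.decision in {"adopt", "adapt", "discard"}
```

A record like this also makes the velocity count trivial: completed experiments are simply those with a recorded decision.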

Formula

Experiment Velocity = Number of Completed Experiments / Time Period

Example:

  • A team runs 6 experiments over 3 sprints → 2 experiments per sprint

Optional:

  • Track % of successful experiments adopted
  • Monitor learning outcomes even from "failed" tests
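The formula and the optional adoption tracking can be expressed as two small helper functions. This is a minimal sketch; the function names are assumptions for illustration, and the worked example matches the one above (6 experiments over 3 sprints).

```python
def experiment_velocity(completed_experiments: int, periods: int) -> float:
    """Completed experiments divided by the number of time periods."""
    if periods <= 0:
        raise ValueError("periods must be positive")
    return completed_experiments / periods

def adoption_rate(adopted: int, completed: int) -> float:
    """Optional: share of completed experiments whose changes were adopted."""
    return adopted / completed if completed else 0.0

# Worked example from the text: 6 experiments over 3 sprints -> 2 per sprint
assert experiment_velocity(6, 3) == 2.0
```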

Instrumentation Tips

  • Maintain an "experiment board" with status and outcomes
  • Use structured templates for planning and reviewing experiments
  • Include a lightweight retro on each experiment
  • Log outcomes in team wiki or improvement log for future reference

Benchmarks

| Experiment Velocity | Interpretation |
| --- | --- |
| 2+ per sprint | High-velocity learning team |
| 1 per sprint | Healthy experimentation culture |
| 1–2 per month | Some learning, may be ad hoc |
| <1 per month | Low experimentation, likely inertia or fear |

Benchmarks may vary with team maturity and workload. Focus on quality over quantity.
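The benchmark bands above can be mapped in code. A sketch, assuming a per-sprint velocity as input and two sprints per month to convert the monthly bands; both assumptions should be adjusted to your team's cadence.

```python
def interpret_velocity(per_sprint: float, sprints_per_month: float = 2.0) -> str:
    """Map an experiment velocity onto the benchmark bands.

    sprints_per_month is an assumption used only to convert the
    per-month bands into per-sprint terms.
    """
    per_month = per_sprint * sprints_per_month
    if per_sprint >= 2:
        return "High-velocity learning team"
    if per_sprint >= 1:
        return "Healthy experimentation culture"
    if per_month >= 1:
        return "Some learning, may be ad hoc"
    return "Low experimentation, likely inertia or fear"
```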

Why It Matters

  • Accelerates learning
    Frequent experiments help teams rapidly test assumptions and evolve better ways of working.

  • Builds autonomy and mastery
    Teams feel empowered to change their environment and own the outcomes.

  • Strengthens adaptability
    Regular learning makes teams better prepared for change and uncertainty.

  • Reduces risk of stagnation
    Avoids long periods of unchallenged, ineffective habits or processes.

Best Practices

  • Keep experiments small, safe to fail, and timeboxed
  • Align experiments to current challenges or friction points
  • Share learnings across teams to reduce duplication
  • Track hypotheses and outcomes to improve future rigour
  • Encourage psychological safety to support “learning over being right”

Common Pitfalls

  • Treating every change as an experiment without clear learning intent
  • Skipping outcome review, so learning is lost
  • Always measuring success as “adoption” rather than learning
  • Avoiding experiments due to fear of failure or judgment

Signals of Success

  • Teams propose and run experiments unprompted
  • Experiments lead to measurable improvements or insight
  • Learning from experiments is documented and reused
  • Stakeholders and leadership support a test-and-learn culture

Related Measures

  • [[CoE/Agile/Measures/Continuous Improvement/Retrospective Action Completion Rate]]
  • [[Improvement Initiative Throughput]]
  • [[Learning Investment Ratio]]
  • [[Innovation Adoption Rate]]

Aligned Industry Research

  • Lean Startup (Eric Ries)
    Pioneered the idea of build-measure-learn cycles as the foundation of innovation.

  • Team Topologies
    Supports platform and enabling teams in helping others experiment safely and effectively.

  • Continuous Discovery Habits (Teresa Torres)
    Reinforces the value of weekly small bets and regular learning cycles within product teams.

