Let me be blunt: most organisations are doing DORA wrong. Not because they're measuring the wrong things - but because they're using the right measures in entirely the wrong way.
DORA - the DevOps Research and Assessment programme - has produced some of the most rigorous, longitudinal research in software engineering. The four key metrics (Deployment Frequency, Lead Time for Change, Change Failure Rate, and Mean Time to Recovery) are genuine indicators of software delivery capability. They correlate with organisational performance. They matter.
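To make the four definitions concrete, here is a minimal sketch of how they can be computed from a deployment log. The record structure and the numbers are invented for illustration - real tooling would pull this from your pipeline and incident system - and note it deliberately measures lead time from "work started", a choice I return to below.

```python
from datetime import datetime, timedelta

# Hypothetical deployment log (invented data for illustration).
# "restored" is when service recovered after a deployment-caused failure.
deployments = [
    {"deployed": datetime(2024, 3, 1, 10), "started": datetime(2024, 2, 26, 9),
     "failed": False, "restored": None},
    {"deployed": datetime(2024, 3, 4, 15), "started": datetime(2024, 2, 28, 11),
     "failed": True, "restored": datetime(2024, 3, 4, 18)},
    {"deployed": datetime(2024, 3, 8, 9), "started": datetime(2024, 3, 5, 14),
     "failed": False, "restored": None},
]

window_days = 28

# Deployment Frequency: deployments per day over the window.
deploy_freq = len(deployments) / window_days

# Lead Time for Change: median time from work started to deployed.
lead_times = sorted(d["deployed"] - d["started"] for d in deployments)
median_lead = lead_times[len(lead_times) // 2]

# Change Failure Rate: share of deployments that caused a failure.
cfr = sum(d["failed"] for d in deployments) / len(deployments)

# Mean Time to Recovery: average time from failing deployment to restoration
# (a simplification - in practice you would measure from failure detection).
recoveries = [d["restored"] - d["deployed"] for d in deployments if d["failed"]]
mttr = sum(recoveries, timedelta()) / len(recoveries)

print(f"Deploys/day: {deploy_freq:.2f}, median lead time: {median_lead}, "
      f"CFR: {cfr:.0%}, MTTR: {mttr}")
```

The arithmetic is trivial; the hard part, as the rest of this piece argues, is what you count and what you do with the answer.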
And yet, in the wild, I see them abused constantly.
The moment you put DORA metrics on a dashboard visible to senior leadership, something shifts. Teams stop trying to improve their system and start trying to improve their number. The metric becomes the target. And as Goodhart's Law - in Marilyn Strathern's phrasing - reminds us: when a measure becomes a target, it ceases to be a good measure.
I've seen teams inflate deployment frequency by splitting non-meaningful work into smaller batches - not to improve flow, but to report higher numbers. I've seen Change Failure Rate suppressed by reclassifying incidents. I've seen Lead Time for Change measured from "code committed" rather than "work started", hiding the real queue.
The metric didn't fail. The incentive structure failed.
A Deployment Frequency of once per day is elite for a monolithic banking core system and mediocre for a modern SaaS product. Comparing your number to a benchmark is largely meaningless unless your context - your architecture, your team structure, your risk profile - is comparable.
What matters is not where you are on the DORA spectrum today. What matters is the direction of travel and the reason for your current position.
An organisation that deploys weekly and knows exactly why - and has a credible path to daily - is in a far better position than one that deploys daily but has no idea why their Change Failure Rate keeps creeping up.
Here's the deeper problem. DORA metrics are system-level indicators. They tell you about the health of your entire software delivery system - the pipeline, the architecture, the organisational structure, the culture, the governance, the deployment infrastructure.
But most organisations try to improve them at the team level.
You cannot improve your Lead Time for Change by coaching a single team to work faster. Lead Time for Change is dominated by handoffs, approvals, environment wait times, and integration bottlenecks - none of which a team controls in isolation. Improving it requires changing the system those teams operate within.
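A back-of-the-envelope decomposition makes the point. The stage durations below are entirely invented, but the shape is typical: the one stage a single team fully controls is a small fraction of the total.

```python
# Hypothetical breakdown of one change's lead time into stages (hours).
# The numbers are illustrative, not measured data.
stages = {
    "coding": 16,                        # the only stage one team fully controls
    "waiting for review": 20,
    "change approval queue": 48,
    "waiting for test environment": 72,
    "integration and release window": 40,
}

total = sum(stages.values())
team_controlled = stages["coding"]
print(f"Total lead time: {total}h; "
      f"team-controlled share: {team_controlled / total:.0%}")
```

With numbers like these, even doubling the team's coding speed barely moves the total; removing one queue does far more.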
This is why BVSSH - Better Value Sooner Safer Happier - is the right lens. Sooner is a system outcome. Achieving it requires system-level thinking. That means looking at your change approval process, your deployment pipeline, your test automation coverage, your architecture, your team dependencies - all of it, together.
I've coined a term - DORA washing - to describe organisations that adopt the vocabulary of DORA without the intent. They instrument the metrics, run the surveys, publish the dashboards. And nothing changes underneath.
This isn't cynicism. It's a pattern. Measurement without action is theatre. And DORA, used as a reporting mechanism rather than a diagnostic one, becomes expensive theatre.
The organisations that do this well use DORA metrics as questions, not answers. Deployment Frequency is low - why? Is it pipeline speed? Approval overhead? Risk aversion? Architecture coupling? Each answer points to a different intervention. The metric tells you something is wrong. It does not tell you what to fix.
The DORA programme's research is clear: elite performers - the highest-performing cluster across all four metrics - are significantly more likely to meet or exceed their organisational performance goals, achieve superior reliability, and report higher employee satisfaction.
That last point is not incidental. The Happier outcome in BVSSH is deeply connected to how software is delivered. Teams that can deploy safely and frequently, that don't live in fear of their release pipeline, that can recover from incidents without blame - those teams are happier. The correlation is robust.
Use DORA to understand your system. Use it to find constraints. Use it to drive the right conversations - not with engineers, but with the people who control the architecture, the governance, the investment in platform capability.
That's what DORA is for. Not a leaderboard. Not a performance review. A diagnostic for system health.
If you're using it any other way, you're optimising the wrong thing.