
Modern Software Engineering

David Farley


The manifesto for treating software development as an engineering discipline - not a guessing game

David Farley has spent decades building and thinking about high-performing software systems. He is one of the original authors of the Continuous Delivery book, a practitioner with deep scars from real delivery challenges, and one of the most rigorous thinkers working in software engineering today. Modern Software Engineering is his attempt to articulate - clearly, completely, and without compromise - what it actually means to do this work well.

The central provocation of the book is that most software development is not engineering. It is talented guessing - iterative, hopeful, and fundamentally dependent on intuition and experience rather than disciplined method. When it works, we call it software engineering. When it doesn't, we call it a failed project. Farley argues for something more rigorous: a set of first principles and techniques that, applied consistently, produce better outcomes regardless of context.

This is not a book about tools. It is a book about thinking. What does it mean to manage complexity? What are the properties of a system that make it possible to test and deploy with confidence? What does expertise look like in this domain, and how do you build it? These questions have answers. The answers are not as widely taught or as widely practised as they should be.


Why this book matters

The engineering excellence conversation has been dominated for years by practices - CI/CD, TDD, microservices, DevOps - without a coherent explanation of why those practices produce better outcomes. Teams adopt them (or fail to) without understanding the underlying principles, which means they can't adapt them intelligently to their context, can't evaluate new practices against the same standard, and can't explain to their organisations why they matter.

Farley provides the underlying theory. The book is built around two fundamental ideas: optimise for learning, and manage complexity. Everything else - the practices, the techniques, the architectural principles - flows from those two roots. When you understand why, you can evaluate any practice, any proposal, and any trade-off against the same set of first principles.

For engineering leaders, this book is the intellectual foundation that makes every other conversation easier. You are no longer arguing for a practice. You are arguing from a principle. That changes the quality of every technical and strategic discussion your team has.


Key insights

1. Software engineering is primarily about managing uncertainty

Farley's opening move is definitional: software engineering is not manufacturing. We are not building the same thing repeatedly to a known specification. We are exploring a problem space, discovering requirements, and building systems that will need to evolve in ways we cannot predict. The primary challenge is not execution of a known plan - it is navigation under genuine uncertainty.

This distinction has profound consequences. Manufacturing-derived project management - detailed upfront plans, fixed scope, stage-gate approval - is actively wrong for software development, not just suboptimal. It treats uncertainty as a planning failure when it is an irreducible property of the domain. The appropriate response to irreducible uncertainty is not better planning. It is faster feedback, smaller steps, and systems designed for change.


2. Optimising for learning requires fast, reliable feedback loops

If uncertainty is the fundamental challenge, then learning is the fundamental response - learning about the problem, about the solution, about the system, about customer behaviour. And learning requires feedback. The speed and reliability of feedback loops is therefore the primary determinant of engineering effectiveness.

Farley is specific: feedback loops should be short (measured in minutes, not days), reliable (a failing test means a real problem, not a flaky pipeline), and comprehensive (covering behaviour at the unit, integration, and acceptance levels). The speed and reliability of feedback loops are therefore the primary determinants of engineering effectiveness. Every investment in test quality, build speed, and deployment automation is an investment in the rate at which a team can learn - which is ultimately the rate at which it can improve.

DORA's four key metrics are a direct expression of this principle: deployment frequency and lead time measure the speed of the learning loop; change failure rate and recovery time measure its reliability.
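
As a rough illustration (not from the book), the four metrics fall out of simple arithmetic over deployment records. The record shape and values here are hypothetical - real pipelines would pull these from CI and incident tooling:

```python
from datetime import datetime, timedelta

# Hypothetical records: (commit_time, deploy_time, failed, restored_time)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 11), False, None),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 2, 10, 30), True, datetime(2024, 1, 2, 11)),
    (datetime(2024, 1, 3, 14), datetime(2024, 1, 3, 15), False, None),
]

# Speed of the learning loop: lead time from commit to deploy.
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Reliability of the learning loop: failure rate and time to restore.
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)
recovery_times = [restored - deploy for _, deploy, _, restored in failures]
```

Deployment frequency is just the count of records per time window; the point is that all four numbers are cheap to compute once deployments are recorded at all.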


3. Modularity and abstraction are the primary tools for managing complexity

The second pillar of Farley's framework - managing complexity - is addressed through two principles that every senior engineer knows but few organisations apply consistently: modularity (separate things that change for different reasons) and abstraction (hide complexity behind well-defined interfaces).

These principles apply at every level of the stack: code, service, team, and organisation. A system that is hard to change is almost always a system where these principles have been violated - where concerns are mixed, where dependencies are hidden, where changing one thing requires understanding everything. The fix is rarely a rewrite. It is the patient, deliberate application of these principles to the highest-friction areas of the system.

The team and organisational equivalent is Conway's Law: organisations build systems that mirror their communication structures. If you want modular systems, you need modular teams.
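
A minimal sketch of the two principles at the code level, using a hypothetical tax example (not taken from the book): the interface hides how tax is computed, and the two concerns - tax rules and order totalling - can change independently.

```python
from abc import ABC, abstractmethod

# Abstraction: callers depend on this interface, not on any tax rule.
class TaxPolicy(ABC):
    @abstractmethod
    def tax(self, amount: float) -> float: ...

# Modularity: tax rules change for regulatory reasons, totalling logic
# for business reasons, so they live on opposite sides of the interface.
class FlatRateTax(TaxPolicy):
    def __init__(self, rate: float):
        self.rate = rate

    def tax(self, amount: float) -> float:
        return amount * self.rate

def order_total(subtotal: float, policy: TaxPolicy) -> float:
    return subtotal + policy.tax(subtotal)

print(order_total(100.0, FlatRateTax(0.2)))  # 120.0
```

Swapping in a new tax regime means adding a new `TaxPolicy` implementation; `order_total` never needs to be touched or even re-read.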


4. Testability is an architectural property, not a testing afterthought

One of Farley's most important and most neglected arguments: testability is a design quality. A system that is difficult to test is difficult to test because of how it was designed - because of tight coupling, hidden dependencies, mixed concerns, and implicit state. Adding tests after the fact doesn't fix the underlying design. It papers over it.

The implication is that testability should be a first-class architectural concern - as important as performance, security, or scalability. Systems designed for testability are typically also more modular, more maintainable, and easier to understand. The disciplines reinforce each other. The organisations that produce reliable, high-quality software are not the ones with the best QA process; they are the ones that build testability into the design from the start.
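
One common way this plays out in code (my example, not Farley's) is a hidden dependency on the system clock. Making the dependency explicit and injectable is a design change, and it is what makes the module testable:

```python
import datetime

# Hard to test: the dependency on today's date is hidden inside the method.
class ReportUntestable:
    def header(self) -> str:
        return f"Report generated {datetime.date.today()}"

# Testable by design: the clock is an explicit, injectable dependency,
# defaulting to the real one in production.
class Report:
    def __init__(self, today=datetime.date.today):
        self._today = today

    def header(self) -> str:
        return f"Report generated {self._today()}"

# In a test, inject a fixed clock; no monkey-patching, no real time.
fixed = lambda: datetime.date(2024, 1, 1)
assert Report(today=fixed).header() == "Report generated 2024-01-01"
```

The same move - surfacing a hidden dependency as a parameter - also loosens coupling and clarifies the module's real inputs, which is the sense in which testability and good design reinforce each other.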


5. Continuous Delivery is not a practice - it is a capability that reveals the quality of everything else

Farley's background in Continuous Delivery (he co-authored the foundational book on the subject) is visible throughout, but his framing here is particularly useful: CD is not a deployment practice. It is a quality probe that makes the health of your engineering system visible in real time.

If you cannot release software safely on demand, something is wrong - with your architecture, your testing, your processes, your team structures, or your organisational culture. CD doesn't cause those problems; it reveals them. The organisations that resist CD are usually the ones that most need the feedback it provides. The discomfort of implementing it is the discomfort of confronting what the system actually looks like, rather than what the planning documents say it looks like.


Thought-provoking takeaways

  • How long does it take your team to get feedback on a code change? From commit to test results? From commit to production? Each of these numbers is a proxy for your team's learning rate - and your long-term improvement trajectory.

  • How testable is your codebase - really? Not as a test coverage percentage, but as a design question: can you change any module in isolation, test it in isolation, and deploy it in isolation? If not, what is entangled that shouldn't be?

  • What is the most complex part of your system? Is that complexity essential (inherent in the domain) or accidental (accumulated through design decisions)? How much of your team's cognitive load is being consumed by accidental complexity?

  • When was the last time your team changed its technical approach because of something it learned - not from a tech conference, but from the production system's actual behaviour?

  • Does your architecture reflect your team structure, or does your team structure reflect your architecture? Which came first, and which needs to change?


Actions - for this week

  1. Measure your feedback loops. Time the gap from code commit to test results, to deployment, to customer feedback. Plot the current state. Now ask: what would need to be true to halve each of these?

  2. Pick one module that is hard to test and investigate why. Is it a coupling problem? A dependency problem? A state problem? The answer points directly to the design fix.

  3. Identify your highest-complexity area - the part of the system where changes are slowest, most risky, and most feared. Is that complexity essential or accidental? What one change would reduce accidental complexity the most?

  4. Have a conversation about principles, not practices. In your next technical discussion, when a practice comes up (TDD, trunk-based development, service isolation), ask: what principle does this serve? Can you articulate it? If not, find out - or reconsider the practice.

  5. Evaluate your deployment pipeline as a quality probe. What does it currently reveal about your system's health? What is it not revealing that it should?


"Software engineering is the application of an empirical, scientific approach to finding efficient, economic solutions to practical problems in software."

- David Farley