
Org Design Anti-Patterns

The structures that look sensible but silently destroy flow.

Most org design problems are not caused by bad intentions - they are caused by structures that made sense at one scale but were never updated as the organisation grew.

Why Anti-Patterns Matter

Anti-patterns are not mistakes. They are solutions - solutions to a real problem at a point in time, which have become problems themselves because the context changed and the structure didn't.

A small startup with one engineering team and one product area does not need stream-aligned team design. The team is the stream. Everyone talks to everyone. The architecture is whatever the team builds it to be, because there is only one team to build it. This is not an anti-pattern. This is the appropriate structure for the size.

As the organisation grows - more people, more products, more specialisms, more complexity - the structures that worked at the old scale begin to create friction at the new one. The single team becomes multiple teams. The informal communication that held everything together becomes impossible to maintain. The shared codebase becomes a coordination bottleneck. The "we all know how this works" culture becomes "only three people actually know how this works."

The anti-patterns described here emerge in this transition - when organisations grow without redesigning. They persist because they are invisible to the people inside them. When you have always worked in a component team structure, it feels normal. The coordination overhead is just "how things work." The slow lead times are attributed to technical debt or headcount or tooling. The structural cause goes unexamined because it is the water everyone is swimming in.

By the time these patterns are visible, they have usually been in place for years. The cost to change them is high: team identities are attached to the structure, career paths are built around it, the system architecture has evolved to match it, and the conversation about changing it requires admitting that a long-standing decision was wrong. This is why they persist. It is not because nobody is smart enough to see them. It is because seeing them requires the kind of systemic perspective that daily operational pressure does not encourage.

Naming them matters. It gives leaders and practitioners the vocabulary to have the conversation. It separates the structural diagnosis from the people involved, which makes the conversation more productive. And it provides a shared reference point for what "better" looks like.


The Component Team Trap

What It Is

A component team owns a layer of the technology stack. A front-end team owns the UI. A back-end team owns the APIs and business logic. A data team owns the data layer. An infrastructure team owns the deployment and runtime environment. A QA team owns testing.

This structure is extremely common in software engineering organisations. It is also, in most contexts, a significant impediment to fast delivery.

Why It Forms

Component teams form for intuitive reasons. Grouping front-end engineers together means they can share knowledge, maintain consistent standards, and develop shared tooling. Grouping back-end engineers together means they can reason about the overall service architecture. Grouping data engineers together means they can maintain consistent data modelling practices. These are real benefits.

The structure also maps neatly to how most technical leaders think about the system: as a layered architecture, with each layer requiring distinct expertise. When the org chart mirrors the architecture, it feels orderly.

What It Costs

End-to-end delivery requires changes to all layers simultaneously. A new feature - almost any meaningful new feature - requires a UI change, an API change, a data model change, and a deployment update. That means four teams must coordinate, sequence their work, and align their releases.

Each team has its own backlog. Its own priorities. Its own definition of "ready." Its own sprint cadence. Its own release schedule. Coordinating across them requires planning sessions, dependency tracking, alignment meetings, and integration environments. Each of these is overhead that does not exist in a stream-aligned team structure.

More insidiously: each team optimises for its own layer. The back-end team designs APIs that make sense for the back end, not APIs that the front end can use without pain. The data team creates schemas that are clean in a database sense, but don't reflect how the business logic team needs to query the data. The infrastructure team prioritises stability and rejects changes that introduce operational risk, regardless of the business value. Each optimisation is locally coherent and globally incoherent.

Lead time in a component team structure is dominated by queue time between layers. The feature sits in the front-end backlog waiting to be picked up. Then it sits in the API team's queue waiting for a review. Then it waits for the data team to apply the schema change. Then it waits for the infrastructure team to provision the environment. The active development time might be two weeks. The elapsed time is three months. And in each queue, the feature loses its place if any team's priorities shift.
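The arithmetic above can be made concrete as a flow-efficiency calculation: touch time divided by total lead time. The numbers below are illustrative placeholders matching the two-weeks-of-work, three-months-elapsed example, not measured data; substitute your own queue measurements.

```python
# Flow efficiency: the fraction of elapsed lead time spent actively working.
# All numbers are hypothetical, for illustration only.

touch_time_days = 10  # active development across all four teams (~2 weeks)

# Days the feature spends waiting in each team's queue (hypothetical):
queue_days = {
    "front-end backlog": 20,
    "API review queue": 15,
    "schema change queue": 15,
    "environment provisioning": 5,
}

lead_time_days = touch_time_days + sum(queue_days.values())
flow_efficiency = touch_time_days / lead_time_days

print(f"lead time: {lead_time_days} days")          # ~3 months elapsed
print(f"flow efficiency: {flow_efficiency:.0%}")    # small fraction is touch time
```

A flow efficiency well below 50% tells you that reshuffling work inside any one team will barely move lead time; the queues between teams are where the time goes.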

How to Escape It

The escape from the component team trap is to reorganise around value streams rather than technical layers. Instead of a front-end team, a back-end team, and a data team, you have product teams - each of which is full-stack and owns the entire delivery for its area.

This requires two things that are harder than they sound. First, the teams need the full-stack capability to actually own end-to-end delivery. This usually requires some upskilling and some careful hiring. Second, the platform needs to be good enough that each team is not re-solving infrastructure problems that should be solved once. Platform investment is the prerequisite for stream-aligned team effectiveness.

The transition is not instant. You can move toward this structure incrementally by embedding engineers from component teams into product teams, by reducing the number of hand-offs required, and by investing in the platform capability that reduces each team's infrastructure overhead.


The Shared Services Bottleneck

What It Is

A shared services team provides a capability - security review, data analysis, legal review, architecture governance, database administration, UX design - to every other team in the organisation. Every team that needs this capability must go through this team.

Why It Forms

Centralising specialist capabilities seems efficient. Why have a security expert on every team when you can have a security team that serves all teams? The skill is expensive to acquire. The demand from any single team is insufficient to justify a full-time hire. Centralising it seems to solve the problem.

What It Costs

The shared services team cannot scale with the demand placed on it. Each stream-aligned team generates demand. The shared services team is one team. As the number of stream-aligned teams grows, demand grows linearly while capacity is fixed. The queue grows. Lead times grow.
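This linear-demand, fixed-capacity dynamic has a standard queueing-theory shape. A minimal M/M/1 sketch, with illustrative rates rather than real data, shows how the mean queue wait grows non-linearly as utilisation approaches 100%: adding two more stream-aligned teams near saturation does far more damage than adding two when the shared team is half idle.

```python
# M/M/1 queueing sketch (illustrative rates, not measurements):
# each stream-aligned team submits requests at a steady rate; the shared
# services team completes them at a fixed rate. Mean queue wait is
# Wq = rho / (mu - lambda), which blows up as utilisation nears 100%.

service_rate = 10.0   # requests the shared team can complete per week (mu)
per_team_rate = 1.0   # requests each stream-aligned team submits per week

for n_teams in (4, 6, 8, 9):
    arrival_rate = n_teams * per_team_rate          # total lambda
    utilisation = arrival_rate / service_rate       # rho
    wait_weeks = utilisation / (service_rate - arrival_rate)
    print(f"{n_teams} teams: utilisation {utilisation:.0%}, "
          f"mean queue wait {wait_weeks:.2f} weeks")
```

Going from 8 to 9 teams roughly doubles the wait, even though demand only grew 12.5%. This is why a shared services team that "just about copes" this quarter can be hopelessly behind next quarter.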

The shared services team has to prioritise. Some requests are urgent. Some are large. Some are routine. The prioritisation is always contested, because every team believes its need is urgent and important. The shared services team spends a disproportionate amount of its time managing the queue and communicating about prioritisation rather than delivering the capability.

Because the shared services team is always behind, stream-aligned teams learn to work around it. They defer security review until the end of a project - when addressing findings is expensive. They push data requests through informal channels, bypassing the queue at the cost of consistency. They interpret "waiting for the security team" as permission to continue without the review. The capability that was supposed to be centralised and rigorous becomes inconsistent and bolted on.

The shared services team is also always last in line for investment. It is not a revenue-generating function. Its contribution is invisible when everything is working and visible only when something goes wrong. When budget is tight, it loses headcount. Its queue grows longer. Its influence on delivery diminishes.

How to Fix It

The fix depends on the nature of the shared capability.

If it is a practice - security hygiene, testing discipline, accessibility standards - the capability should be distributed into stream-aligned teams, with an enabling team that raises the capability floor across teams and a platform that embeds the practice into the delivery workflow. Security scanning in the pipeline is more effective and more scalable than a manual security review by a central team.

If it is a platform capability - database provisioning, environment management, infrastructure as code - it should be abstracted into the internal developer platform and made available as self-service. The central team becomes a platform team. The interaction model changes from "submit a ticket and wait" to "here is the API; use it."

If the capability genuinely requires specialist involvement, the specialist team should interact via X-as-a-Service for routine cases and Collaboration for complex or novel ones. The default should be self-service. The specialist team's time should be reserved for genuinely complex problems, not routine compliance checks.


The Project Team Problem

What It Is

A project team is a temporary group assembled to deliver a specific initiative. The team delivers, the project closes, and the team disbands or reassigns. What was built is handed to a "run" function - a separate team responsible for operating it.

Why It Forms

Project-based delivery is deeply embedded in how most organisations fund and staff work. Capital expenditure budgets are more available than operational expenditure budgets. Business stakeholders find it easier to approve a project with a defined scope and end date than to approve a permanent team with an ongoing mandate. Project management as a discipline is built around this model.

What It Costs

Knowledge evaporates at project end. The engineers who built the system understood how it worked, why decisions were made, and where the sharp edges are. When they disperse to other projects, that knowledge is partially lost. Documentation captures some of it - but only some. The nuance of why a particular design choice was made, what alternatives were considered and rejected, what the edge cases are - this does not survive transition reliably.

The run team inherits debt it did not create. The project team had no incentive to invest in operability. They would not be the ones operating it. Test automation, monitoring, runbooks, deployment simplicity - these were nice-to-haves in the project, not requirements. The run team inherits a system that is poorly instrumented, incompletely tested, and difficult to change safely. They are accountable for a system they did not build and cannot fully understand.

Continuity of improvement is broken. A product is never finished. It needs to evolve based on user feedback, changing requirements, and growing understanding of the domain. The project team is gone. The run team is in maintenance mode - fixing bugs, responding to incidents, making small changes. No one is invested in the product's trajectory. It degrades slowly.

How to Fix It

Durable product teams that own products through their full lifecycle are the structural answer. The team that builds it operates it. The team that operates it improves it. Accountability is continuous rather than episodic.

This requires a shift in funding model - from project-based to product-based funding - which is a significant organisational change, not just a team design change. But the team design change can precede the funding model change by ensuring that teams do not disband when a project scope is "complete" and that operational accountability is held by the team that delivered the capability.


The Coordination Overhead Spiral

What It Is

As teams proliferate and dependencies between them grow, the time spent coordinating work - in meetings, in ticket exchanges, in Slack threads, in dependency management - grows disproportionately to the time spent doing the work. Delivery slows. The organisation responds by adding more coordination mechanisms: programme increment (PI) planning, programme managers, alignment sessions, integration environments, dependency tracking tools. The coordination overhead grows further.
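One reason the overhead grows disproportionately is simple combinatorics: teams grow linearly, but potential communication paths between them grow quadratically. A small sketch of the pairwise-links formula:

```python
# Potential pairwise communication paths among n teams: n * (n - 1) / 2.
# Teams scale linearly; the coordination surface scales quadratically.

def communication_paths(n_teams: int) -> int:
    """Number of distinct team pairs that might need to coordinate."""
    return n_teams * (n_teams - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:>2} teams -> {communication_paths(n):>3} potential paths")
```

Not every pair actually coordinates, of course. But unless team boundaries deliberately limit which pairs need to talk, the number of active paths tends to track this curve.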

Why It Forms

Dependencies between teams are a normal consequence of system complexity. Not all dependencies can be eliminated. The problem is when dependencies are allowed to accumulate without structural intervention.

Dependencies form when team boundaries do not match the natural boundaries of the work. When a team's scope of ownership is too narrow relative to the features it is asked to deliver, it depends on other teams for parts of those features. When the platform is insufficient, every team has informal dependencies on whichever team knows how to navigate the infrastructure. When ownership of shared components is unclear, multiple teams form dependencies on whoever happens to have the knowledge.

How to Break the Spiral

Coordination overhead is a symptom. The disease is inappropriate team boundaries and insufficient platform capability. Address those:

  • Redesign team boundaries so that routine delivery does not require cross-team coordination. If every significant feature requires four teams, the teams are scoped wrong.
  • Invest in the platform so that infrastructure and operational concerns are self-service. If platform dependencies are a significant source of coordination, the platform is underinvested.
  • Define explicit interaction modes between teams. Unstructured collaboration is the most expensive interaction mode. Define what kind of interaction each team relationship requires and ensure it is the cheapest mode that meets the need.

Adding more coordination tools or programme management resources to a coordination-bottlenecked system is treating the symptom. It adds overhead rather than removing it.


The Reorg Without Redesign

What It Is

Leadership announces a reorganisation. Team boxes on the org chart are rearranged. Reporting lines change. Team names change. Portfolios shift. The underlying ownership, interfaces, incentives, and system architecture remain unchanged. Six months later, delivery is still slow, coordination overhead is still high, and the same frustrations exist - now with the added confusion of people not knowing who their new peers are.

Why It Happens

Reorgs are the default response to delivery problems when the diagnosis is unclear or the real fix is too expensive. Moving boxes is fast. Changing system architecture, redefining ownership, rebuilding team capability, redesigning incentive structures - these take months. A reorg can be announced in a town hall and completed in a few weeks.

Reorgs also serve political purposes. New leaders restructure organisations to signal intent. Underperforming areas are reorganised to create the appearance of action. A troubled team is put under a new manager who is supposed to fix it. The structural problem is unchanged. The leadership signal has been sent.

What Actually Needs to Change

A genuine redesign of an engineering organisation requires:

Ownership redesign. Who owns what? Which team is the accountable owner of each service, each domain, each capability? Ownership that is unclear or shared is a structural problem that a reorg cannot fix unless the reorg explicitly addresses it.

Interface redesign. How do teams interact? What is the interaction mode for each significant team relationship? What can teams get from each other, and through what mechanism? These must be explicitly defined, not assumed to follow from the new org chart.

Incentive alignment. What are teams measured on? If stream-aligned teams are measured on individual team output rather than end-to-end flow, they will optimise for local metrics at the expense of the value stream. Measurement and incentive structures must align with the structural change, or the structural change will not stick.

Technical alignment. The system architecture must evolve to match the new team structure, or Conway's Law will pull it back toward the old structure. Technical work - decomposing shared codebases, defining explicit APIs, migrating to ownership-aligned architectures - must be sequenced alongside the organisational change.


How to Diagnose Your Current Anti-Patterns

Work through this checklist honestly. A "yes" answer to any of these questions is a signal that the named anti-pattern is present.

Component Team Trap:

  • Does delivering a significant new feature require coordinating across more than two teams?
  • Do teams own a technical layer (front end, back end, data) rather than a product or domain?
  • Is there a separate QA team that all other teams depend on for testing?

Shared Services Bottleneck:

  • Is there a team with a ticket queue that other teams complain about regularly?
  • Do teams defer security, architecture, or compliance review until the end of a project?
  • Is there a team that cannot say no to requests but also cannot keep up with them?

Project Team Problem:

  • Do teams disband or reassign after delivering a project?
  • Is operational responsibility handed to a different team than the one that built the system?
  • Is there a "build vs. run" distinction embedded in the org structure?

Coordination Overhead Spiral:

  • Do teams spend more than 20% of their time in coordination activities (meetings, dependency management, alignment sessions)?
  • Are there more than two levels of dependency between a team and its ability to release independently?
  • Has the answer to slow delivery been to add more programme management or coordination tooling?

Reorg Without Redesign:

  • In the last two years, has a reorg been followed by a period where things got worse before they got better - and the "getting better" part was much longer than expected?
  • Are there teams that changed their name and their reporting line but have the same people, the same backlog, and the same friction as before?
  • After a reorg, did anyone explicitly redesign the ownership structure, team interfaces, and incentive model?
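For the 20% coordination question in the checklist, a rough self-check is to total a team's weekly coordination hours against its total capacity. The hour counts below are hypothetical placeholders; pull real numbers from calendars or time tracking before drawing conclusions.

```python
# Rough coordination-ratio check. All figures are hypothetical examples;
# replace them with data from your own calendars or time tracking.

weekly_hours_per_person = 40
team_size = 6

# Total hours the whole team spends on each coordination activity per week:
coordination_hours = {
    "planning and alignment meetings": 24,
    "dependency triage and ticket exchange": 10,
    "cross-team chat threads": 18,
}

total_capacity = weekly_hours_per_person * team_size
coordination_ratio = sum(coordination_hours.values()) / total_capacity

print(f"coordination ratio: {coordination_ratio:.0%}")
```

If the ratio sits above the 20% threshold for several consecutive weeks, treat it as a structural signal about team boundaries and platform capability, not as a meeting-hygiene problem.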

Connection to Your Operating Model

Anti-patterns are the gap between the operating model you have and the operating model you want. They persist not because they are invisible - eventually, the symptoms become undeniable - but because the structural causes are hard to see from inside the system, and the interventions required are harder than the symptoms suggest.

This is why the framing of systems versus individuals matters. The people inside these anti-pattern structures are not failing. They are adapting - sensibly, locally - to a system that is imposing structural constraints on them. Blaming the engineers for slow delivery in a component team structure is like blaming the water for being slow to flow through a kinked hose. The kink is the problem.

The value of having this vocabulary - component team trap, shared services bottleneck, project team problem, coordination overhead spiral, reorg without redesign - is that it enables a structural conversation. It separates the diagnosis from the people and makes the intervention discussable.

Conway's Law predicts these patterns. When teams are structured around technical layers, the system will have clearly delineated layers with poorly integrated boundaries. When shared services own critical capabilities, the system will have centralised bottlenecks. When project teams are the norm, the system will be difficult to change safely because nobody owns the ongoing health of what was built.

The operating model you want - fast flow, independent deployability, clear ownership, continuous improvement - requires structural conditions to be present. Anti-patterns are the structural conditions that prevent those outcomes. Diagnosing and addressing them is the prerequisite for everything else.