Learning & Development

Growth happens in the work. Not in the training catalogue.

Most L&D programmes fail because they treat learning as a separate activity from work. The evidence is clear: meaningful capability growth happens through stretch, reflection, and feedback - not courses.

How People Actually Develop

The 70/20/10 model was developed by Morgan McCall and colleagues at the Center for Creative Leadership in the 1980s, based on research into how successful executives described their development. The finding was consistent: roughly 70% of meaningful learning happened through challenging work and experience, 20% through relationships and feedback, and 10% through formal learning (courses, reading, training).

The model is descriptive, not prescriptive. It describes how development actually happens, not how organisations should allocate L&D budget. The implication for engineering organisations is straightforward: if you want to develop engineers, the primary mechanism is the work itself - specifically, work that stretches people beyond their current capability in a supported environment.

The reason most L&D programmes fail is not that they choose the wrong courses. It is that they operate entirely in the 10% - a catalogue of training options, a learning management system, an annual budget for conferences - while paying little attention to whether the 70% is structured to create genuine learning. An organisation with a rich training catalogue and no deliberate approach to stretch assignments is investing in the wrong end.

This does not mean formal learning is worthless. A well-chosen course can provide conceptual scaffolding that makes experiential learning faster. A conference can expose engineers to ideas they would not have encountered otherwise. A book can surface a framing that changes how someone approaches a class of problems. But these work as accelerants to development in the work, not as substitutes for it.

Why Experience Alone Isn't Enough

Experience is necessary but not sufficient. An engineer who has been writing Java microservices for ten years has a lot of experience. That experience is valuable. But if those ten years have been spent doing similar work in similar contexts without reflection, feedback, or deliberate challenge, the engineer has not necessarily developed significantly - they have become very efficient at a narrow set of tasks.

The component that converts experience into capability is reflection with feedback. Doing the work, then stopping to ask: what went well, what didn't, why, and what would I do differently? - and getting honest input from someone who can see what you cannot see in yourself. Without this, experience accumulates without generating insight.

This is why psychological safety, blameless retrospectives, and honest feedback cultures are not soft concerns. They are the infrastructure for organisational learning. An organisation where admitting mistakes is risky is an organisation where engineers do not reflect honestly on their experience - and therefore do not learn from it as effectively as they could.


Stretch Assignments

A stretch assignment is a piece of work that is deliberately beyond an engineer's current comfortable capability, scoped and supported in a way that makes growth the expected outcome.

The word "deliberately" is important. Random difficulty is not stretch. An engineer who gets dropped into a production crisis with no support and no debrief afterward has had a hard experience, not a stretch assignment. The stretch assignment is characterised by:

Intent: the manager has chosen this work because it addresses a specific capability gap. The assignment is not accidental - it is purposeful.

Transparency: the engineer knows this is a stretch. They know what capability they are expected to develop. The absence of this transparency is one of the most common failures - engineers are put into challenging situations without understanding why, which makes the learning accidental rather than deliberate.

Scaffolding: there is support available. Not so much support that the challenge is removed - a stretch assignment that is fully managed by the manager is not a stretch for the engineer - but enough that the engineer is not set up to fail.

Debrief: after the assignment, there is a structured conversation about what was learned. Not just "how did it go?" but "what did you do when X happened? What would you do differently? What have you learned about how you work under this kind of pressure?"

Scoping a Stretch Assignment Well

A stretch assignment that is too far beyond current capability is not a stretch - it is a setup for failure. A stretch assignment that is only marginally beyond current capability is not a stretch - it is just a regular piece of work. The useful zone is between these: genuinely challenging, meaningfully supported.

Practical indicators that the scope is right:

  • The engineer feels uncertain about how to approach it (good) but not paralysed (bad)
  • The engineer can articulate what help they would need (good) vs. being unable to identify what they don't know (bad)
  • Failure would be recoverable (good) vs. failure would cause significant organisational harm (bad)

Practical indicators that the scoping is wrong:

  • The manager keeps stepping in to rescue the situation (too hard, or too little scaffolding)
  • The engineer completes the work without encountering any significant challenge (not enough stretch)
  • The engineer cannot connect the experience to a capability they were trying to develop (intent was not communicated clearly)

Types of Stretch Assignments for Engineers

Assignment Type | Capability Developed | Appropriate For
Leading a significant technical design | Systems thinking; communication; influence | ISE → SSE transition
Owning a production incident response end-to-end | Decision-making under pressure; operational thinking; communication | ISE, SSE
Delivering a cross-team technical initiative | Stakeholder management; coordination; delivery at scale | SSE → LSE transition
Representing the team in senior stakeholder meetings | Communication; confidence; organisational navigation | SSE, LSE
Mentoring a junior engineer through a complex piece of work | Coaching; communication; patience; reflecting on own knowledge | ISE, SSE
Leading a team for a sprint while the manager is away | People leadership; prioritisation; decision-making | SSE considering TTL path
Writing and presenting a technical strategy document | Strategic thinking; written communication; synthesis | LSE, SSE
Running a department-wide technical review or forum | Facilitation; authority; breadth of technical engagement | LSE

Sponsorship vs Mentoring vs Coaching

These three words are used interchangeably in most organisations. They are not the same thing. Conflating them leads to engineers getting the wrong kind of support for the situation they are in, and to managers thinking they are developing people when they are not.

Mentoring

Mentoring is a relationship in which a more experienced person shares their experience, perspective, and knowledge with a less experienced person. The mentor offers insight based on what they have been through. The relationship is typically ongoing and relatively informal.

Mentoring is useful when someone needs context about how things work - how to navigate the organisation, what career paths look like in practice, how to think about a career decision. It is not useful when someone needs specific skill development (that is coaching) or when someone needs access to opportunity (that is sponsorship).

A mentor shares experience. They do not direct the mentee's development or solve their problems for them. The risk of mentoring is that it collapses into advice-giving - the mentor telling the mentee what to do, which removes the mentee's agency and does not build their capability to make decisions independently.

Coaching

Coaching is a structured process in which a coach helps someone think more clearly about a problem, a decision, or a development goal. The coach does not share their own experience or give advice. They ask good questions, reflect back what they hear, and help the person access their own thinking more effectively.

Coaching is useful when someone is stuck - when they know they need to change something but cannot work out what or how. It is not useful when someone needs knowledge they do not have (that is training or mentoring) or when someone needs access to opportunity (that is sponsorship).

A coach never tells the coachee what to do. The entire value of coaching comes from the coachee developing their own clarity and their own solutions. Managers who think they are coaching but are actually giving advice (even good advice) are denying their engineers the most valuable part of the process.

Sponsorship

Sponsorship is an active advocacy relationship in which a sponsor uses their credibility, network, and influence to create opportunities for the person they are sponsoring. The sponsor talks about the sponsored person in rooms they are not in. They recommend them for stretch assignments, for promotion, for visibility with senior leaders. They put their own reputation on the line.

Sponsorship is useful at every career level but is disproportionately valuable for engineers from underrepresented groups, who are less likely to have informal sponsorship through social networks. The research is unambiguous: sponsorship has a more significant impact on career progression than mentoring, because it creates opportunity rather than just preparing someone for it.

A sponsor does not wait for the sponsored person to impress them in every situation. They take an active stake in the person's success.

The Matrix

Role | What They Do | When It's Useful | Common Failure Mode
Mentor | Share experience and perspective | Navigating career decisions; understanding context | Becoming an advice machine; removing engineer agency
Coach | Ask questions that develop the person's own thinking | When someone is stuck; building self-awareness | Giving advice instead of asking questions
Sponsor | Create opportunity through advocacy | Always, especially for underrepresented engineers | Waiting for perfect performance before advocating

Individual Development Plans That Work

Most Individual Development Plans (IDPs) fail because they are activity lists, not outcome specifications. The engineer lists courses they want to do, conferences they want to attend, and books they want to read. The manager signs off. The IDP goes into the HR system and is not referenced again until the next cycle.

An IDP is a plan for capability development, not a wishlist for learning activities. The distinction changes everything about what a good IDP looks like.

What Most IDPs Get Wrong

Activity focus instead of outcome focus. "Complete the AWS Solutions Architect course" is an activity. "Build the capability to lead architectural design decisions for our cloud infrastructure" is an outcome. Activities may or may not lead to the outcome. Specifying the outcome makes it possible to evaluate whether progress is happening and adjust the approach if it is not.

No connection to current work. The most effective development happens in the context of real work. An IDP that describes learning activities entirely disconnected from the engineer's current projects is describing learning in the 10% while ignoring the 70%.

No mechanism for accountability. If the IDP is reviewed once a year, it is not a development plan. It is a document. A working IDP requires check-ins at least quarterly, with honest conversation about what is working and what is not.

Vague timelines. "Develop stronger communication skills over the next year" is not a plan. "By the end of Q3, have led at least two significant technical presentations to non-technical stakeholders and have received and incorporated feedback from each" is a plan.

An Outcome-Focused IDP Template

Development goal (one sentence, specifying the capability to be developed): Lead architectural design decisions for cross-service integrations independently, including producing design documents, facilitating design reviews, and incorporating stakeholder feedback.

Why this matters (connection to current level and next level): This is a key SSE-level capability. Currently operating at ISE - can implement designs when given direction, but not yet consistently driving them. Developing this capability is a prerequisite for SSE promotion.

How we will know it is working (observable evidence): Has led at least two significant design efforts. Design documents have been reviewed and approved without requiring fundamental rework. Can articulate the trade-offs made and defend them under questioning from senior engineers.

Development approach (70/20/10):

  • Work (70%): Lead the design of the authentication service refactor in Q2. Take primary responsibility for the design document, design review facilitation, and incorporation of feedback.
  • Relationships (20%): Monthly check-in with LSE Sarah to review progress and get feedback on design thinking. Peer review on the design document from two SSEs to get calibrated feedback.
  • Formal (10%): Read "Designing Distributed Systems" - focus on chapters 4-7. Watch the Architecture Without Architects talk from GOTO Copenhagen.

Review points: Monthly 1:1 review; mid-point check (end of Q1); outcome assessment (end of Q2).
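For teams that keep development plans in version control rather than in an HR system, the template above maps naturally onto a small data structure, which makes review dates harder to quietly forget. This is a hypothetical sketch, not a prescribed tool - the field names and example values are illustrative:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class DevelopmentGoal:
    """One outcome-focused IDP entry, mirroring the template above."""
    capability: str                 # the outcome, not an activity list
    why_it_matters: str
    evidence_of_progress: List[str]
    work_70: List[str]              # stretch work in real projects
    relationships_20: List[str]     # feedback and check-ins
    formal_10: List[str]            # courses, books, talks
    review_dates: List[date]

    def next_review(self, today: date) -> Optional[date]:
        """First scheduled review on or after today, else None."""
        upcoming = [d for d in self.review_dates if d >= today]
        return min(upcoming) if upcoming else None

goal = DevelopmentGoal(
    capability="Lead cross-service architectural design independently",
    why_it_matters="Key SSE-level capability; prerequisite for promotion",
    evidence_of_progress=["Two design efforts led",
                          "Design docs approved without fundamental rework"],
    work_70=["Lead the authentication service refactor design in Q2"],
    relationships_20=["Monthly design-thinking review with an LSE"],
    formal_10=["Designing Distributed Systems, chapters 4-7"],
    review_dates=[date(2025, 3, 31), date(2025, 6, 30)],
)
print(goal.next_review(date(2025, 2, 1)))   # 2025-03-31
```

The 70/20/10 split is explicit in the structure, which makes the common failure mode - a plan that is all `formal_10` and no `work_70` - visible at a glance.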


Learning Pathways

A learning pathway is a structured sequence of development activities mapped to a specific capability gap. It is more specific than a reading list (which is just a collection of resources) and more targeted than a generic development plan (which is an outcome without a route).

A learning pathway answers: if someone is at point A and needs to reach point B in capability X, what is the most effective sequence of activities to get them there?

The sequence matters. Trying to do advanced distributed systems design without solid foundations in concurrency and networking is frustrating and slow. Getting the foundations right first accelerates everything that comes after. A good learning pathway is sequenced so that each step builds on the last.

Mapping Learning to Specific Capability Gaps

The starting point is identifying the gap with precision. Not "improve technical skills" but "build the capability to design systems that handle eventual consistency correctly, including identifying where it creates user-facing problems and knowing how to mitigate them."

With a precise gap, it becomes possible to map a pathway:

Step 1 - Conceptual foundation: Read "Designing Data-Intensive Applications" chapters 7-9. Goal: understand the theoretical basis for consistency, availability, and partition tolerance trade-offs.

Step 2 - Applied exploration: Set up a local experiment with a distributed KV store and deliberately induce partition behaviour. Document what you observe and what surprised you.

Step 3 - Guided practice: Review the existing codebase for places where eventual consistency is not handled correctly. Write up your findings as a short technical brief. Discuss with senior engineer.

Step 4 - Real work application: Take ownership of the next feature that requires designing across service boundaries with asynchronous patterns. Apply the framework from steps 1-3.

Step 5 - Reflection and calibration: After delivery, write a short retrospective on the design decisions made and what you would do differently. Get feedback from a senior engineer.

This is a pathway. A reading list is just step 1.
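The local experiment in step 2 does not need real infrastructure. A minimal sketch, assuming nothing beyond the standard library: two in-memory replicas with last-write-wins reconciliation, a simulated partition, and the user-facing hazard the pathway is about - a write silently lost on merge:

```python
import itertools

_clock = itertools.count()   # logical clock: deterministic write ordering

class Replica:
    """A naive KV replica: local writes apply immediately; state
    spreads to peers only when the 'network' allows a sync."""
    def __init__(self, name):
        self.name = name
        self.store = {}          # key -> (logical timestamp, value)

    def write(self, key, value):
        self.store[key] = (next(_clock), value)

    def read(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None

    def sync_from(self, other):
        # Last-write-wins: for each key, keep whichever replica
        # holds the most recent write.
        for key, (ts, value) in other.store.items():
            if key not in self.store or self.store[key][0] < ts:
                self.store[key] = (ts, value)

a, b = Replica("a"), Replica("b")
a.write("theme", "dark")
b.sync_from(a)                    # network healthy: b converges on "dark"

# --- simulated partition: a and b stop syncing ---
a.write("theme", "light")
b.write("theme", "sepia")         # concurrent, conflicting writes
print(a.read("theme"), b.read("theme"))    # replicas now disagree

# --- partition heals: bidirectional sync ---
a.sync_from(b)
b.sync_from(a)
print(a.read("theme") == b.read("theme"))  # converged again - but the
# "light" write was silently discarded, which is exactly the class of
# user-facing problem the capability gap is about
```

Deliberately inducing and observing this divergence-then-lossy-merge behaviour, and writing up what surprised you, is the point of the step - not the toy code itself.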


The Manager's Role

The manager's role in development is not to do the development for engineers. It is to create the conditions in which development happens.

This distinction matters more than it might appear. A manager who does the development for engineers - who steps in when things get hard, who solves the problems the engineer should be working through, who tells people what to do rather than helping them figure it out - is removing the friction that creates learning. They are also creating dependency. Engineers who are developed by being told what to do are not building the judgement and independence they will need at the next level.

The manager's actual job in development:

Identifying and creating stretch opportunities: The manager knows what work is coming, what is on the roadmap, what the team needs. They can allocate work deliberately to create development opportunities rather than simply to get things done efficiently.

Providing honest, specific feedback: Not "good work" and not "you need to communicate better." Specific, timely feedback on observable behaviour, connected to impact. This is a skill and it requires practice. Most managers are not good at it by default.

Making space without removing challenge: Available for support when genuinely stuck, without pre-empting the struggle that creates learning. The right question when an engineer comes to you stuck is usually "what have you tried?" not "here's what you should do."

Connecting development to real work: Explicitly linking the engineer's development goals to their actual work assignments. "I'm giving you this design work because it's the kind of thing that will stretch you in exactly the way we talked about in your development plan" is a sentence managers should be saying regularly.

Following through: Development plans that are reviewed once a year are not being managed. The manager needs to track progress, have regular check-ins, and adjust the plan when it is not working.


Organisational Learning vs Individual Learning

Individual learning and organisational learning are related but distinct. An organisation can employ individually capable engineers while failing to improve as an organisation - because individual capability is siloed, because lessons from failure are not shared, because practices that work in one team do not diffuse to others.

Organisational learning requires deliberate mechanisms.

Communities of Practice

A community of practice is a group of practitioners who share a common interest or discipline and meet regularly to learn from each other. In engineering, this typically means communities around disciplines (backend engineering, data engineering, platform engineering) or practices (testing, architecture, security).

A functioning community of practice does three things: it creates a forum for sharing what people are learning in their day-to-day work; it builds a shared vocabulary and shared standards across teams; and it provides a mechanism for disseminating new practice without requiring top-down mandates.

A non-functioning community of practice is a monthly meeting with a rotating presenter and declining attendance. The signal that a CoP has become bureaucratic rather than useful is when people attend out of obligation rather than because they are getting something from it.

Blameless Culture as Learning Infrastructure

Blameless retrospectives and post-incident reviews are not just good practice for operational reliability. They are learning infrastructure. An organisation where admitting mistakes is risky is an organisation where failures are hidden until they cannot be hidden - at which point the learning opportunity is often lost under the pressure of fixing the immediate problem.

Blameless means that the search is for systemic causes, not individual culprits. It does not mean consequence-free. It means that the question "who made the error?" is less useful than "what conditions made this error possible and likely?" Answering the second question produces learning that prevents future failures. Answering the first question produces defensiveness that prevents learning.

Retrospectives as Learning Mechanisms

Sprint retrospectives, when run well, are a regular mechanism for teams to reflect on their working practices and improve them incrementally. The failure mode is the retrospective that generates a list of actions nobody does anything about. The success mode is the retrospective that generates one or two specific, owned actions that are followed up at the next retro.

The retrospective is not just a delivery improvement tool. It is a learning mechanism. "We had three incidents this sprint involving the same kind of configuration error" is a team-level learning moment. What is the systemic response? A new linting rule, a checklist, a change to the deployment process? The retrospective is where organisational learning becomes concrete action.
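That systemic response can be very small. A hypothetical sketch of a retro finding encoded as an automated pre-deploy check - the config keys and rules here are invented for illustration, not taken from any real pipeline:

```python
# Guard grown out of a (hypothetical) retrospective finding: three
# incidents in one sprint traced to services deployed with a missing
# or malformed "timeout_ms" setting. Instead of relying on reviewer
# vigilance, the team encodes the lesson as a check that runs before
# every deploy.
REQUIRED_KEYS = {"service_name", "timeout_ms", "replicas"}

def check_deploy_config(config: dict) -> list:
    """Return a list of human-readable problems; empty means OK."""
    problems = []
    for key in sorted(REQUIRED_KEYS - config.keys()):
        problems.append(f"missing required key: {key}")
    if "timeout_ms" in config and not isinstance(config["timeout_ms"], int):
        problems.append("timeout_ms must be an integer number of milliseconds")
    if config.get("replicas", 1) < 1:
        problems.append("replicas must be at least 1")
    return problems

good = {"service_name": "auth", "timeout_ms": 5000, "replicas": 2}
bad = {"service_name": "auth", "replicas": 0}
print(check_deploy_config(good))   # []
print(check_deploy_config(bad))    # two problems reported
```

The design choice is the point: the learning now lives in the system rather than in individual memory, so it survives team turnover and does not depend on anyone remembering the retro.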


Common Failures

Mandatory Training Theatre

Mandatory training that exists to satisfy a compliance requirement rather than to develop capability is not development - it is theatre. Engineers know it. They complete the modules, click through the slides, answer the knowledge-check questions, and retain nothing. The organisation has spent budget and engineer time and produced no capability improvement.

The signal: completion rates are high, but nobody can describe what they learned or how they applied it. The fix: assess what capability the training is supposed to develop, and ask whether it is developing it. If not, find an approach that does.

The Annual Review That Comes Too Late

Development feedback provided once a year in a formal review is almost entirely wasted. By the time the feedback is given, the relevant work is months old. The feedback cannot be applied immediately. There is no mechanism for checking whether it was incorporated. The engineer has had eleven months of operating without accurate feedback.

The fix is not a better annual review process. It is regular, specific, timely feedback in 1:1s and immediately after significant pieces of work. The annual review should contain no surprises - it should be a summary of conversations that have already happened throughout the year.

Development Plans Nobody Revisits

An IDP created at the start of the year and next reviewed at the end of the year is not a development plan. It is a compliance artefact. The development plan should be a living document that is referenced in every relevant 1:1, updated when circumstances change, and assessed honestly at regular intervals.

If an engineer cannot describe their current development plan without looking it up, the plan is not embedded enough to be useful.

Conflating Training Budget with Development Investment

An organisation with a generous training budget but no deliberate approach to stretch assignments, sponsorship, or structured feedback has not made a development investment. It has purchased the illusion of one. Development investment is measured not in pounds spent on courses but in the deliberateness with which the 70% is managed.


Connection to Your Operating Model

Learning and development connects every other element of the career and capability system.

Career pathways define the destination. L&D defines the route. Without a pathway, development conversations have no direction - engineers develop capability without knowing what they are developing toward. Without development infrastructure, the pathway is a map with no roads.

Capability frameworks define the gaps. L&D is the mechanism for closing them. The capability framework tells you what SSE-level delivery capability looks like. The development plan tells you how a specific engineer will build it.

Career conversations are the venue where development is discussed. The manager needs to come to a career conversation with genuine insight into what opportunities exist for development, not just a willingness to talk about it.

Promotion and levelling is the point at which development is validated. The evidence for a promotion case comes from stretch assignments done well, capability developed and demonstrated, and patterns of behaviour that are consistent with the next level. Development is not a separate activity from promotion readiness - it is the mechanism that creates it.