Performance management as practised in most organisations is a ritual that serves the organisation's legal department far better than it serves the people inside it. The annual review cycle - form submission, rating assignment, stack ranking, and a conversation that surprises nobody except the person receiving the rating - has been demonstrated repeatedly to have no meaningful correlation with improved performance. And yet it persists, because the alternative requires managers to have honest conversations throughout the year, which is harder.
This section is about building a performance management approach that actually works. That means being specific about what performance looks like in an engineering context, building the rhythms and habits that make continuous improvement possible, and equipping managers to have the conversations that matter - before they become formal processes.
Why Annual Reviews Fail
The annual review fails for predictable, structural reasons. Understanding why it fails is essential to replacing it effectively, because the temptation is to fix the form rather than the system.
The Recency Effect
Human memory is not an accurate recorder of twelve months of performance. Managers conducting an annual review are primarily evaluating the last six to eight weeks of work. If the preceding ten months included excellent delivery that has since been overshadowed by a difficult project, or a period of personal difficulty, or simply the accumulation of invisible background work that has no obvious moment of visibility - all of that effectively disappears.
The recency effect is not a failing of individual managers; it is a property of how memory works. The structural fix is not to try harder to remember, but to record regularly. A manager who has twelve months of written check-in notes is in a fundamentally different position from one who does not.
The Compression Effect
Most rating scales used in annual reviews produce a bell curve through managerial instinct rather than calibrated evidence. Managers anchor to the middle. "Meets expectations" becomes the default because:
- Rating someone as "exceeds" requires justification and sets a precedent
- Rating someone as "below expectations" requires documentation and initiates a process
- "Meets expectations" is safe, defensible, and avoids difficult conversations
The result is that most people receive ratings that tell them nothing. They neither understand what they did well nor what they need to do differently. The rating becomes a number that determines a pay uplift percentage, which is the only reason most people care about it at all.
The Compensation Entanglement Problem
When performance ratings directly determine salary increases, the conversation cannot be honest. A manager trying to give developmental feedback while simultaneously delivering a rating that determines compensation is trying to do two incompatible things at once. The employee is doing the maths on their pay rise while nodding at the feedback. Neither person is fully present for the conversation that would actually be useful.
Some organisations separate these conversations by six months. This helps. The more fundamental fix is to build a performance culture where compensation decisions are informed by an ongoing record of contribution, not by a single conversation at year-end.
Too Late to Be Useful
Annual reviews are retrospective by design. If a delivery problem, a quality issue, or a behavioural pattern has been allowed to persist for eleven months without being named, a review conversation will not fix it. The feedback arrives after the moment where it could have changed behaviour. By the time it is documented in an annual review, it has often already affected the team, the product, or the individual's own career trajectory in ways that cannot be undone by the conversation.
The appropriate response to a performance issue is to name it when it is visible, not to bank it for an annual conversation.
What Continuous Performance Actually Means
Continuous performance management is frequently misunderstood as meaning more reviews. It does not mean more reviews. It means that performance conversations are embedded in the rhythm of work, rather than being an event that arrives once a year and is immediately forgotten.
The mechanisms that make continuous performance real are not complicated:
Regular 1:1s with a performance dimension. A weekly or fortnightly 1:1 that is genuinely used to discuss progress, blockers, quality of work, and development is the single most effective performance management tool available to a manager. The conversation is brief, informal, and bidirectional. It does not require a form. It does require that the manager is genuinely paying attention to what the person is working on and how.
Project retrospectives that include individual reflection. The end of a significant delivery is a natural moment to discuss what the person contributed, what they found difficult, what they would do differently, and what capabilities they want to develop next. This is not a performance review. It is a development conversation that happens to also generate useful information about performance.
Documented check-ins. Brief written notes - what was discussed, what commitments were made, what follow-up is needed - are the raw material from which any formal assessment is built. Without them, the formal review is a fiction. With them, it is a synthesis.
Explicit expectation-setting at the start of work. Continuous performance management requires knowing what "good" looks like before the work begins. This means discussing objectives, success criteria, and how progress will be visible - at the start of a project or quarter, not at the end.
The rhythm might look like this:
| Cadence | Conversation Type | Owner |
|---|---|---|
| Weekly | 1:1 - progress, blockers, wellbeing | Manager leads, engineer drives agenda |
| Monthly | Development check-in - growth, feedback, goals | Manager leads |
| Quarterly | Performance reflection - contribution, impact, development | Joint |
| Annually | Formal review - evidence synthesis, compensation, career | Manager, with HR |
| At project end | Retrospective contribution review | Team and manager |
None of these conversations require a form. All of them should be documented in some way.
Managing Performance vs Managing People
This is the distinction that most management training gets wrong by omission. The conflation of managing performance (improving outcomes) with managing people (attending to the whole person) produces managers who are either too soft to have difficult conversations or too harsh to build the relationships that make difficult conversations possible.
Managing Performance
Managing performance means being clear about what good looks like, tracking progress against it, identifying when performance is falling short of what is needed, naming that clearly and early, providing specific and actionable feedback, and supporting the person to improve. It is outcome-focused. It is honest. It does not require the manager to like or dislike the person.
Managing People
Managing people means understanding that the person producing the performance is a human being with a life, history, motivations, fears, and development needs. It means knowing what drives them, what they find difficult, what their career aspirations are, and what is happening outside work that might be affecting what is happening inside it. It is relationship-focused. It requires genuine interest.
The confusion happens when:
- A manager who manages performance but not people becomes someone who delivers blunt assessments without context or support, damaging relationships and losing good people who do not feel seen
- A manager who manages people but not performance becomes someone who cannot name underperformance, who avoids difficult conversations because they value the relationship, and who allows problems to compound until they become serious - at which point the relationship is damaged anyway
The effective manager does both. They maintain high standards and name clearly when those standards are not being met. They also understand the person well enough to know whether the performance issue is a capability gap, a motivation issue, a personal circumstance, a poor role fit, or a system problem. That understanding changes the intervention.
What High Performance Looks Like in Engineering
One of the reasons performance conversations in engineering are difficult is that the vocabulary of performance is often wrong. Activity is mistaken for output. Output is mistaken for outcome. Effort is mistaken for impact.
Outcomes vs Activity
Activity is commits, tickets closed, meetings attended, velocity points delivered. These are visible and easy to count. They are also poor proxies for performance, because they can be gamed, they do not account for quality, and they say nothing about whether the work mattered.
Outcomes are delivered value - features used, bugs prevented, build times reduced, customer problems solved, team capability increased. These are harder to measure but are the actual point of engineering work. A manager assessing performance should be asking "what happened because of this person's work?" not "how many things did this person do?"
Concrete Signals of High Performance
The following are observable, specific indicators of high performance in an engineering context:
Delivery quality. Work arrives ready for review. Pull requests are well-described, self-reviewed, and require minimal back-and-forth. Production incidents attributable to their work are rare and, when they occur, are handled with care and transparency.
Technical judgement. They know when to build and when not to. They push back on unnecessary complexity. They identify risk before it becomes a problem. Their architectural opinions are considered and clearly reasoned.
Team multiplier effect. High performers in engineering frequently make the people around them faster, not just themselves. They review other people's work thoughtfully. They document decisions. They unblock colleagues. They transfer knowledge.
Proactive communication. They surface problems early. They are never discovered to have known something important and said nothing. They communicate progress without being asked.
Adaptability. When priorities change, they adapt without drama. When they encounter something unfamiliar, they learn it rather than avoiding it.
What Busyness Looks Like (and Why It Is Not Performance)
| Looks Like Performance | Is Not Performance |
|---|---|
| Closing many tickets | Tickets were small, trivial, or self-generated |
| High commit frequency | Commits are small fixes to earlier broken work |
| Always in meetings | Meetings produce no decisions or actions |
| Responding quickly to everything | Responsiveness is substituting for deep work |
| Long hours | Long hours signal poor estimation or poor boundaries, not exceptional output |
The Performance Conversation Cadence
Weekly 1:1s
The weekly 1:1 is the manager's most important tool. It should not be a status update - that is what standup or asynchronous updates are for. It should cover:
- What is the person working on and is it going well?
- What is getting in the way?
- What do they need from the manager?
- What is one thing from the previous week worth noting - positive or developmental?
The manager's job in a 1:1 is mostly to listen. A 1:1 where the manager talks for more than a third of the time is not a 1:1, it is a briefing.
What good looks like: The engineer drives the agenda. The conversation is honest in both directions. Both participants leave with clear next actions.
What bad looks like: The meeting is cancelled more than twice in a row without rescheduling. The manager does most of the talking. The agenda is entirely project status. No feedback is exchanged.
Monthly Development Conversations
Once a month, the 1:1 should include an explicit development dimension:
- What have you learned this month?
- What do you want to get better at?
- What would help you do that?
- Are you getting the feedback you need?
This does not need to be a separate meeting. It is ten minutes within an existing 1:1, structured explicitly. It generates the raw material for any personal development plan (PDP) review, and it signals to the engineer that their development is taken seriously.
Quarterly Performance Reflections
Four times a year, the manager and engineer should have a more structured conversation about performance. This is not a review - it is a check-in against the objectives and expectations set at the start of the quarter. It should cover:
- What were the agreed objectives and how have we done against them?
- What has gone well and why?
- What has not gone well and what do we learn from that?
- What do we want to focus on next quarter?
This conversation should take about 45 minutes. It should be documented. It replaces the need for the annual review to contain any surprises.
The Annual Review
If all of the above is happening, the annual review is a synthesis, not a revelation. It should cover:
- A summary of the year's contribution
- Formal rating or assessment (if required by the organisation)
- Compensation discussion (if tied to this cycle)
- Career development discussion - where does the person want to go over the next 12-24 months?
The annual review should contain nothing that the person has not already heard in some form. If it does, the cadence above has not been working as intended.
Connecting Performance to Development
Performance data, when collected well, is the most useful development input available. A quarterly review that identifies a pattern of strong technical delivery but weak written communication is pointing directly at a development priority. A check-in that surfaces a recurring pattern of underestimation is identifying a skill gap.
The connection between performance observation and development action requires:
Naming the pattern, not just the instance. "This PR was poorly described" is feedback on an instance. "I've noticed that your written communication tends to be sparse, and I think it's affecting how your work lands with reviewers and stakeholders" is identification of a development need.
Connecting to development resources. When a gap is identified, the manager's job is to help the person find a path to closing it - not to simply note it exists. That might mean pairing with someone, taking on a specific type of work, getting a book or course, or getting more structured feedback on that specific skill.
Reviewing development progress. If a development goal is set in February and never mentioned again until the December review, it was not a development goal, it was a note. Development commitments need to appear in the monthly development conversations.
Using the capability framework as a reference. If your organisation has a capability framework or career ladder, it should be the explicit reference point for performance and development conversations. The engineer should know what the expectations are at their level and what the expectations look like at the next level. That clarity makes development conversations concrete.
Common Failures
Forced Ranking
Stack ranking - the practice of requiring managers to distribute ratings across a forced curve - destroys trust, incentivises competition over collaboration, and produces ratings that reflect the distribution requirement rather than actual performance. It is theoretically possible to have a team where everyone is performing exceptionally well. Forced ranking cannot reflect that reality. Organisations that use forced ranking are making a choice to value administrative simplicity over accurate assessment.
Recency Bias
The engineer who had a poor Q3 but an excellent Q4 should not receive an "exceeds expectations" rating based on Q4 alone. The engineer who had three excellent quarters but a visible stumble in Q4 should not receive a "needs improvement" rating. Recency bias is the default in the absence of written records. Written records across the year are the only structural fix.
The "Meeting Expectations" Trap
In organisations where ratings are used to determine compensation, "meeting expectations" is often interpreted as "getting the minimum pay rise." This creates resentment in people who are performing well but receive a rating that signals adequacy rather than strong contribution. If the rating system is going to be used, the language around what each level means - including what a "meeting expectations" rating represents in terms of genuine performance quality - needs to be explicit and consistent.
Ratings That Surprise the Recipient
If an engineer receives a "below expectations" or "needs improvement" rating and is surprised by it, the performance management system has failed. The rating may be accurate - the performance may genuinely have been below expectations. But if the person is hearing that for the first time in the formal review, the conversations that should have happened throughout the year did not happen. The formal rating process is not the place to deliver news that should have been delivered months earlier.
The Manager Who Confuses Niceness With Kindness
Avoiding difficult conversations, softening feedback until it loses meaning, giving everyone "meets expectations" regardless of actual contribution - none of this is kind. It withholds information people need to improve, it rewards mediocre performance at the same rate as excellent performance, and it means that when a formal process does eventually happen, it is more severe than it would have been with earlier intervention.
Connection to Your Operating Model
Performance management does not exist in isolation. The effectiveness of the conversations described here depends on:
Clarity of role expectations. If engineers do not know what is expected of them at their level, performance conversations will be vague and unsatisfying. The capability framework and role archetypes provide the vocabulary.
A functioning feedback culture. Performance conversations are built on feedback. If feedback is not flowing regularly in both directions, performance conversations will be awkward and one-sided.
Manager capability. The conversation cadence described here requires managers who are equipped to have honest, developmental, specific conversations. That is a skill. It should be developed explicitly, not assumed.
Psychological safety. Performance conversations are only useful if they are honest. Honesty requires safety. If engineers do not feel safe to surface struggles, admit mistakes, or disagree with their manager's assessment, the performance conversation becomes a performance of a different kind.
HR processes that support rather than constrain. The formal HR process - ratings, documentation, PIPs - should be a backstop, not the primary mechanism. When HR processes become the primary mechanism, it signals that the informal systems have broken down.
The Manager Who Has Never Had Training
A substantial proportion of engineering managers were promoted because they were excellent engineers. They were given a team, a title, and perhaps a day of management training. Performance conversations were never covered in any depth, feedback frameworks were never taught, and the expectation that they would figure it out through experience is the reason so many engineers are managed by people who avoid difficult conversations.
This is not a criticism of those managers. It is a description of an organisational failure. If your organisation has not explicitly trained managers in performance conversation skills, it has no right to be surprised when those conversations do not happen or happen poorly.
The fix is not complex: train managers in the specific skills they need - feedback frameworks, conversation structure, documentation practice - and support them through the first few difficult conversations with coaching rather than criticism. A manager who has never been shown how to name underperformance clearly should not be expected to do it by instinct.
The performance management system described here is not a product that can be purchased or a template that can be dropped into an organisation. It is a set of habits, conversations, and norms that have to be built over time. The annual review is easy to replace with a form. What it takes longer to replace is the culture of avoidance that the annual review has enabled. That requires sustained commitment from leadership, investment in manager capability, and patience.
This section connects directly to: Feedback Systems, PDPs and PIPs, Talent Reviews and Calibration, Addressing Underperformance, the Role Archetypes and Career and Capability frameworks.
Implementing the Performance Management System
Moving from an annual review model to the continuous model described here is a change management challenge as much as a process challenge. The following implementation guidance addresses the common obstacles and failure points.
Starting Point: Assess the Current State
Before redesigning performance management, understand what is actually happening:
- Are 1:1s happening regularly? Are they being cancelled frequently?
- Do engineers know what is expected of them at their level?
- How much notice do people get before formal performance processes begin?
- What do exit interviews say about performance management?
- What is the manager's own view of their capability to have difficult performance conversations?
The answers to these questions determine where the implementation effort is most needed. An organisation where 1:1s are not happening needs a different intervention than one where 1:1s happen but lack substance.
The Manager Development Requirement
No performance management redesign will succeed if managers are not equipped to have the conversations it requires. The investment in manager capability is not optional. It should include:
Training in feedback frameworks. The SBI model and its application. Worked examples. Practice with a coach or peer.
Role-playing difficult conversations. The early performance conversation, the underperformance conversation, the "you are not being promoted" conversation - these should be practised before they are needed.
Peer manager groups. Managers discussing real situations (appropriately anonymised) with peers, with a facilitator who can help them identify better approaches.
HR as a development partner, not just a process enforcer. HR business partners should be accessible to managers as advisors on performance conversations before they become formal processes, not only after.
Phasing the Change
Attempting to change the entire performance management system simultaneously is a high-risk approach. A phased model:
Phase 1 (months 1-3): Build the rhythm. Focus exclusively on getting regular 1:1s happening with a genuine agenda. Provide a template. Track compliance. Address non-compliance directly.
Phase 2 (months 3-6): Add substance. Introduce the expectation that development and feedback are standing 1:1 agenda items. Provide manager training on feedback frameworks.
Phase 3 (months 6-12): Add structure. Introduce quarterly performance reflections. Define what these look like. Train managers to run them.
Phase 4 (year 2): Calibrate and connect. Ensure talent reviews are using the consistent data generated by the first year of improved 1:1s and quarterly reflections. Begin connecting performance data to development planning systematically.
This phasing is slower than most organisations want. The pressure to show results quickly pushes organisations back toward launching new forms and processes rather than building new behaviours. New forms without new behaviours will produce the same outcomes as the old forms.
Measuring the Right Things
How do you know whether the performance management system is improving? The wrong measures:
- Percentage of forms submitted on time
- Average performance rating distribution
- Number of PIPs initiated
The right measures:
- Percentage of engineers who received developmental feedback in the past 30 days (survey)
- Percentage of engineers who can state their top development priority (survey)
- Average time between first performance concern being observable and first conversation with the engineer about it (requires retrospective case review)
- Retention of high performers compared to previous period
- Manager capability assessment scores (peer and report feedback on manager performance)
These measures are harder to collect and do not produce tidy dashboards. They are also the measures that track the outcomes that actually matter.
Specific Contexts
Performance Management in Remote and Hybrid Teams
The rhythms described in this section are harder to maintain in remote and hybrid environments. The absence of physical proximity removes the incidental check-ins, the visual cues about how someone is doing, and the natural rhythm of shared work. In remote and hybrid teams:
1:1s become more, not less, critical. The absence of physical presence makes the scheduled 1:1 the primary point of personal contact. They should be protected, not cancelled. The camera-on norm is worth enforcing for 1:1s even if it is optional for other meetings - the visual feedback matters for relationship quality.
Asynchronous documentation becomes more important. The manager who documents 1:1 notes, check-in observations, and feedback in writing is building a record that is particularly important when there is no ambient physical awareness of how someone is doing.
Recognition requires deliberate effort. In a shared office, recognition can happen informally in passing. In a remote environment, it requires deliberate action - writing it in a channel, saying it in a meeting, sending a direct message. The activation energy is higher, so the practice needs to be more intentional.
Underperformance signals are harder to read. The withdrawal, the energy change, the reduced contribution in meetings - these are more difficult to notice when you are not in the same physical space. Managers in remote environments need to be more alert to behavioural signals and more willing to ask directly: "How are you finding the work at the moment?"
Performance Management in High-Growth Environments
In organisations growing rapidly, the performance management system is often the first thing to break down. The reasons are predictable: managers are hired without time to develop them, role expectations are unclear because roles are evolving, and the culture prioritises growth speed over the quality of its people practices.
The minimum viable performance management system for a high-growth environment:
- Regular 1:1s happen, with documented notes
- Every engineer knows what is expected of them in their current role
- When performance concerns arise, they are named within two weeks of becoming visible
- Progression criteria are explicit, even if rough
- HR involvement in any formal process is non-negotiable
This is not the full system described in this section. It is the floor below which the organisation is flying blind on its most important investment.
Performance Management After a Reorganisation
Reorganisations disrupt performance management in predictable ways: engineers move to new managers who have no history with them, previous performance context is lost, and the new manager inherits situations without the record that would allow them to understand what has happened previously.
When taking on a new team, a manager should:
- Review any available documentation on previous performance conversations
- Hold brief individual conversations with each engineer to understand their own perspective on their performance and development
- Avoid forming strong performance opinions for at least eight weeks, giving themselves time to observe across a meaningful sample
- Be explicit about any performance concerns they are inheriting, and ensure the previous manager's context has been properly handed over
The engineer should not be disadvantaged by a manager transition. The new manager's lack of context is not the engineer's problem - it is a management continuity challenge that the organisation needs to address through proper handover processes.
Reference: Performance Conversation Quality Checklist
Use this checklist to assess whether a performance conversation - 1:1, quarterly reflection, or annual review - has met a minimum quality threshold:
Before the conversation:
- Both parties know the agenda in advance
- The manager has reviewed any notes from previous conversations
- There is enough time protected for the conversation (not squeezed into 10 minutes)
During the conversation:
- Specific examples are used, not generalisations
- Both parties contribute - this is not a monologue
- Feedback flows in both directions (manager receives feedback as well as gives it)
- Development goals are explicitly discussed, not just performance
- Any performance concerns are named clearly, not softened into irrelevance
After the conversation:
- A brief note is documented by the manager
- Any agreed actions are written down with owners and timelines
- A follow-up date is confirmed
- The engineer leaves knowing what was said, not just how the conversation felt
A conversation that fails three or more of these criteria should be treated as incomplete and followed up with a written summary to fill the gaps.
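The checklist can be applied as a simple quality gate. A minimal sketch, assuming each criterion is recorded as a boolean after the conversation (the criterion names are paraphrases of the checklist above; the three-failure threshold comes from the text):

```python
# Criteria from the conversation quality checklist, recorded as booleans.
# Names are illustrative shorthand for the checklist items above.
CRITERIA = [
    "agenda_known_in_advance",
    "previous_notes_reviewed",
    "time_protected",
    "specific_examples_used",
    "both_parties_contributed",
    "feedback_flowed_both_ways",
    "development_discussed",
    "concerns_named_clearly",
    "note_documented",
    "actions_written_with_owners",
    "follow_up_date_confirmed",
    "engineer_knows_what_was_said",
]

def assess(results: dict[str, bool]) -> str:
    """Treat a conversation failing three or more criteria as incomplete."""
    failed = [c for c in CRITERIA if not results.get(c, False)]
    if len(failed) >= 3:
        return f"incomplete - follow up in writing (failed: {', '.join(failed)})"
    return "meets minimum quality threshold"

example = {c: True for c in CRITERIA}
example["previous_notes_reviewed"] = False
print(assess(example))  # meets minimum quality threshold
```

Whether this lives in code or on a laminated card matters less than the habit: the same criteria applied consistently after every conversation.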
Reference: Performance Rating Definitions
If your organisation uses performance ratings, the following definitions provide a starting point for calibration. They should be reviewed against your capability framework and adjusted to reflect your specific context.
| Rating | Definition | What It Is Not |
|---|---|---|
| Exceptional | Consistently performing at the expectations of the level above. Demonstrable impact beyond immediate role. Raises the bar for the team. | Anyone who is not actively failing |
| Exceeds Expectations | Consistently performing above the core expectations of the current level. Delivering outcomes beyond what was required. | The default for people who are liked |
| Meets Expectations | Consistently meeting the core expectations of the current level. Solid, reliable contribution. This is a positive rating. | A consolation for people who are not exceptional |
| Developing | Not yet consistently meeting the core expectations of the level. May be new to the role, working through a specific gap, or in a period of supported improvement. | Only for underperformers |
| Below Expectations | Significantly and persistently below the core expectations of the level, despite support and clear feedback. Formal process is typically appropriate. | A surprise to the recipient |
The key principle: "Meets Expectations" should never be an insult. If the expectation is set correctly, meeting it consistently is a strong performance. The culture that treats "Meets Expectations" as a negative rating has, by implication, set the real expectation at "Exceeds Expectations" - which is an unsustainable standard that demoralises the majority while meaninglessly rewarding the minority.