Intermediate Data Engineer – Growth Tracker

Name: [ Name ]

IMDE  ·  SFIA 3-4  ·  raganmcgill.co.uk

1 – Novice
No evidence of this yet · Lacks experience in this competency · Requires significant training and guidance
2 – Developing
Evidence of trying but lacking consistency · Demonstrates effort and initial attempts · Progressing, but consistency is still needed
3 – Proficient
Evidence of doing this with areas for improvement · Competent with some areas for enhancement · Meets most expectations
4 – Accomplished
Evidence of consistently meeting expectations · Highly reliable in delivering results · Maintains performance standards
5 – Expert
Evidence of exceeding expectations · Demonstrates exceptional mastery · Autonomous · Leads and mentors others
Learning & Growth
Actively develops depth in data engineering by studying advanced SQL patterns, distributed compute, and data architecture approaches beyond the immediate needs of current work.
Engages with the broader data engineering community through conferences, publications, and open source, bringing relevant insights back to the team.
Reflects on their own technical decisions after delivery, considering what worked well, what they would do differently, and what they would share with others.
Identifies the next level of technical challenge they need to take on and actively pursues it with their TTL or manager.
Develops knowledge of adjacent domains such as analytics engineering, data governance, and platform engineering to collaborate more effectively across team boundaries.
Seeks out feedback on their technical decisions from senior engineers, not just confirmation that their approach is acceptable.
Delivery
Delivers moderately complex data pipeline work independently, managing their own scope, estimating accurately, and flagging risk early.
Maintains delivery momentum while juggling mentoring responsibilities, managing their time deliberately to do both well.
Breaks down large pipeline tasks into reviewable increments and delivers them progressively rather than in single large PRs.
Contributes meaningfully to sprint planning by providing well-reasoned estimates with explicit assumptions and flagging dependencies.
Drives tasks to completion, including post-deployment verification, monitoring setup, and documentation, rather than stopping once code is merged.
Identifies and manages delivery risk proactively, flagging to TTL when scope is larger than initially understood.
Quality & Craft
Sets a visible quality standard for the team through their own work so others can understand what good looks like.
Writes data quality tests that are meaningful and catch real failure modes rather than providing cosmetic coverage (see the sketch after this list).
Refactors proactively within their own delivery, improving existing code when passing through it rather than accumulating debt.
Performs thorough self-review before requesting code review, checking logic, edge cases, performance, and documentation.
Writes code that can be maintained by engineers other than themselves, designing for readability and long-term maintainability.
Identifies systemic quality issues in the codebase and proposes structured improvements rather than one-off fixes.
Champions good data modelling discipline through appropriate normalisation, clear naming conventions, and documented business rules.
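To make the data quality item above concrete, here is a minimal plain-Python sketch of a test that targets real failure modes (nulls in keys, duplicate IDs, impossible dates) rather than cosmetic coverage. All names in it (check_orders_batch, the rows list of dicts, the order_id and order_date fields) are hypothetical illustrations, not part of any team's actual framework.

```python
from datetime import date

def check_orders_batch(rows):
    """Return failure messages for one batch of order rows (list of dicts)."""
    failures = []
    seen_ids = set()
    today = date.today()
    if not rows:
        # An empty batch usually means a silently failed upstream load.
        failures.append("empty batch")
    for row in rows:
        order_id = row.get("order_id")
        if order_id is None:
            failures.append("null order_id")  # broken upstream join?
        elif order_id in seen_ids:
            failures.append(f"duplicate order_id {order_id}")  # replayed load?
        else:
            seen_ids.add(order_id)
        order_date = row.get("order_date")
        if order_date is not None and order_date > today:
            failures.append(f"future order_date on order {order_id}")  # clock or parsing bug
    return failures
```

A row-count check alone would pass all three of these failure modes; an assessor can use an example like this to judge whether an engineer's tests catch problems that would actually hurt downstream consumers.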
Communication
Communicates technical decisions clearly, explaining not just what was decided but why, and what alternatives were considered.
Writes thorough PR descriptions that include context, testing evidence, and guidance for reviewers.
Provides code review feedback that is specific, actionable, and educational, helping junior engineers understand the reasoning.
Surfaces data quality risks and platform concerns to senior engineers and the TTL with clear evidence and impact assessment.
Communicates effectively with data analysts and stakeholders, translating technical concepts into terms relevant to their audience.
Documents important decisions, data model rationale, and pipeline design choices in accessible places for future engineers.
Collaboration
Builds strong working relationships with data analysts, understanding how data is consumed and what quality guarantees matter most.
Collaborates actively with platform engineers on the infrastructure and tooling that underpins the data platform.
Contributes substantively to technical discussions, sharing well-reasoned opinions while remaining genuinely open to better ideas.
Invests time in mentoring junior engineers as a core part of the role, not an optional extra.
Works across team boundaries where data flows connect multiple teams, building trust and establishing clear ownership.
Facilitates knowledge sharing through short technical talks, internal guides, and capturing learnings from incidents.
Ownership
Takes full ownership of the data pipelines and domains assigned to them, understanding them deeply and maintaining them proactively.
Responds to data quality incidents with urgency, investigating, communicating, and resolving with minimal escalation needed.
Advocates for the health of the data platform by raising concerns about technical debt, fragile pipelines, and risks before they become incidents.
Follows through completely on delivery commitments including monitoring setup, documentation, and knowledge transfer.
Takes responsibility for the quality of junior engineers' output when supporting them, owning the mentoring relationship and not just the advice.
Acknowledges and learns openly from technical mistakes, sharing root cause analysis with the wider team where appropriate.
Technical Foundation
Demonstrates advanced SQL and Python capability applied consistently in production-quality pipeline work.
Designs data models with appropriate rigour, applying dimensional modelling or lakehouse patterns with clear documented reasoning.
Builds and operates orchestration pipelines that are observable, recoverable, and maintainable by others (see the sketch after this list).
Implements data quality frameworks that provide genuine confidence in data reliability for downstream consumers.
Understands query performance at sufficient depth to diagnose and resolve warehouse-level performance problems.
Maintains awareness of the broader data platform architecture and how their work fits within it.
Keeps up with evolution in the team's tooling and platform, adapting practices as tools and patterns mature.
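As an illustration of the orchestration item above, the sketch below shows the "observable" and "recoverable" properties in plain Python. run_step, its parameters, and the log format are hypothetical; a real orchestrator (Airflow, Dagster, and similar tools) provides retries, logging, and alerting declaratively, and this is only a reviewer's reference for what those properties look like.

```python
import logging
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def run_step(name, fn, retries=3, backoff_seconds=30):
    """Run one pipeline step with bounded retries.

    Observable: every attempt and outcome is logged with the step name.
    Recoverable: transient failures are retried with linear backoff, and
    the final failure is re-raised so the scheduler can alert and the
    step can be rerun safely (fn must therefore be idempotent).
    """
    for attempt in range(1, retries + 1):
        log.info("step=%s attempt=%d starting", name, attempt)
        try:
            result = fn()
            log.info("step=%s attempt=%d succeeded", name, attempt)
            return result
        except Exception:
            log.exception("step=%s attempt=%d failed", name, attempt)
            if attempt == retries:
                raise
            time.sleep(backoff_seconds * attempt)
```

The idempotency note in the docstring is the point worth probing in a review: recoverability is a property of the step itself, not just of the retry loop around it.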
Evidence & examples

Strengths to recognise

Development focus areas

Overall assessment & agreed actions