
Practice: Cross-Functional AI Team Design

Purpose and Strategic Importance

AI systems that work in production require more than technical excellence in machine learning. They require understanding of the business domain the model serves, user research capability to validate that the system meets real user needs, engineering depth to build reliable data pipelines and serving infrastructure, and design skill to create interfaces through which users interact with AI outputs. Teams composed only of data scientists — or of data scientists and engineers without domain, user, or design expertise — consistently produce AI systems that are technically capable but practically ineffective.

Cross-functional team design is also a risk management practice. Teams that bring diverse perspectives to AI work are more likely to identify ethical risks, fairness concerns, and unintended consequences than technically homogeneous teams. Diversity of background and expertise is not a soft nicety in AI team design — it is a structural defence against the narrow thinking that produces AI systems with serious but avoidable harms.


Description of the Practice

  • Designs AI teams with deliberate representation across the disciplines needed for the full AI lifecycle: data engineering, data science, ML engineering, product management, UX/design, and domain expertise.
  • Defines clear roles and accountabilities within the team so that each function understands its contribution to the shared mission and the interfaces between roles.
  • Co-locates or creates strong collaboration structures between data and software engineering capabilities, preventing the siloed handoffs that are the most common cause of MLOps dysfunction.
  • Establishes team topologies that give AI teams sufficient autonomy to make technical decisions quickly while maintaining the accountability structures needed for responsible AI governance.
  • Revisits team design as AI systems scale and team needs evolve, recognising that the team structure appropriate for exploration and prototyping differs from the one appropriate for production operation.

How to Practise It (Playbook)

1. Getting Started

  • Map the capabilities currently represented in your AI team against the full set needed for production AI delivery, identifying gaps that are creating bottlenecks or quality issues.
  • Prioritise filling the gaps that are causing the most immediate harm to delivery and quality — often data engineering depth or domain expertise — before attempting to fill every gap simultaneously.
  • Define role descriptions for each function in the AI team that are specific to the AI context, not generic job descriptions, enabling clearer hiring and clearer team member expectations.
  • Establish a team charter that articulates the team's mission, ways of working, decision-making processes, and the interfaces with other teams and governance functions.
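The capability mapping in the first step can be sketched as a simple gap analysis. This is a minimal illustration only — the capability set mirrors the disciplines listed above, but the team data and function names are hypothetical, not a prescribed tool or taxonomy.

```python
# Minimal sketch of an AI team capability-gap analysis.
# The team composition below is a hypothetical example.

REQUIRED_CAPABILITIES = {
    "data engineering",
    "data science",
    "ml engineering",
    "product management",
    "ux/design",
    "domain expertise",
}

def capability_gaps(team):
    """Return required capabilities no current team member covers.

    `team` maps member name -> set of capabilities that member covers.
    """
    covered = set().union(*team.values()) if team else set()
    return sorted(REQUIRED_CAPABILITIES - covered)

# Example: two data scientists and one data engineer.
team = {
    "alice": {"data science", "ml engineering"},
    "bob": {"data science"},
    "carol": {"data engineering"},
}

print(capability_gaps(team))
# A team like this lacks product, design, and domain coverage.
```

In practice the output of an exercise like this is a prioritised hiring or partnering plan, not just a list — but making the gaps explicit is the first step.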

2. Scaling and Maturing

  • Develop career pathways for all roles in the AI team that recognise and reward both technical depth and cross-functional collaboration, preventing the formation of siloed sub-team identities.
  • Build a rotation or pairing programme that enables team members to develop understanding across adjacent functions — data scientists spending time with data engineers, ML engineers pairing with domain specialists — building collective fluency across the team.
  • Create an AI community of practice that connects cross-functional AI team members across different product areas or domains, building shared standards and preventing duplication of effort.
  • Review team topology decisions regularly against delivery outcomes and team health indicators, being willing to restructure when the current design is producing friction or quality problems.

3. Team Behaviours to Encourage

  • Treat every team member's perspective as valuable in design and review discussions — the domain specialist's question "but will users actually trust this?" is as important as the data scientist's question "what's the AUC?".
  • Build shared responsibility for AI system outcomes across the whole team — not "the model is the data scientists' problem" and "the infrastructure is the engineers' problem" but "the AI system works, and it's ours together".
  • Invest in building shared language across functions — data scientists learning to communicate with product managers, engineers learning enough ML to engage meaningfully with model design — without requiring everyone to become a generalist.
  • Celebrate team achievements, not individual ones — in AI work, success is always the product of multiple functions working well together, and reward structures should reflect this.

4. Watch Out For…

  • Teams that are cross-functional on paper but siloed in practice — where each function attends the same standups but works independently and hands off rather than collaborating continuously.
  • Imbalanced teams where one function (often data science) dominates decision-making while other functions are treated as execution resources rather than full contributors to design and direction.
  • Team designs that do not include user research or domain expertise, producing technically capable AI systems that solve the wrong problem or solve the right problem in a way users cannot use.
  • Scaling teams by adding more data scientists when the constraint is actually data engineering or ML operations capacity; diagnose the real bottleneck before adding headcount.
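The point about diagnosing the real bottleneck can be illustrated with simple throughput arithmetic: the delivery constraint is the stage with the lowest capacity, and adding headcount anywhere else does not increase what the team ships. The stage names and weekly figures here are hypothetical.

```python
# Illustrative bottleneck check: weekly work-item throughput per
# delivery stage. Stage names and capacities are hypothetical.

def bottleneck(stage_capacity):
    """Return the stage with the lowest throughput (the constraint)."""
    return min(stage_capacity, key=stage_capacity.get)

stages = {
    "data engineering": 3,   # pipelines ready per week
    "model development": 8,  # candidate models per week
    "ml operations": 2,      # models deployed per week
}

# Hiring more data scientists raises "model development", which is
# not the constraint; the team ships no faster until the capacity
# of the bottleneck stage improves.
print(bottleneck(stages))
```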

5. Signals of Success

  • AI systems produced by the team demonstrate excellence across all dimensions — technical quality, user experience, business impact, and responsible AI compliance — not just in the functions most represented on the team.
  • Team members from all functions report feeling that their contributions are valued and that they have meaningful influence on product and technical decisions.
  • Cross-functional collaboration is visible in artefacts — designs co-created by data scientists and UX designers, data pipelines co-designed by data engineers and domain specialists — not just described in process documentation.
  • The team achieves meaningful delivery outcomes and demonstrates its AI work in production — shipping models that serve real users — rather than remaining in perpetual exploration.
  • Leadership can describe the team's composition, roles, and accountabilities clearly, reflecting that team design is understood and supported at an organisational level.

Associated Standards
  • AI teams operate with clear ownership and psychological safety
  • AI work is recognised and celebrated as a team achievement
  • AI tooling is selected with developer experience as a primary criterion
