Policy: Make AI Work Sustainable for the People Who Build It

Commitment to Sustainable AI Practice

The people who build and operate AI systems are doing some of the most cognitively demanding and ethically consequential work in the modern organisation. They navigate ambiguous problem spaces, make high-stakes judgements under uncertainty, carry responsibility for systems that affect many people, and operate in a field where the technology, regulation, and expectations change faster than any individual can comfortably track. We ask a great deal of these people. Our commitment is to ensure that what we ask is sustainable — that the working conditions, team culture, and organisational support provided to AI practitioners enable them to do excellent work over the long term, without the burnout, attrition, and accumulated technical debt that unsustainable working patterns produce.

What This Means

Sustainable AI work means managing cognitive load through team structure, tooling, and process. It means creating psychological safety so that practitioners can raise concerns, challenge decisions, and acknowledge uncertainty without personal risk. It means providing the training, mentoring, and career support that the rapid pace of AI development makes continuously necessary. And it means recognising that the people building AI systems are themselves affected by the culture and conditions in which they work — and that their wellbeing and the quality of their output are directly connected.

Our commitment to making AI work sustainable is built on:

  • Cognitive Load Management – AI teams are sized and structured to manage the cognitive demands of the work. Team members are not simultaneously responsible for more concurrent high-complexity initiatives than they can maintain adequate attention on. Cognitive overload is treated as an architectural risk, not a personnel management issue.
  • Psychological Safety – AI practitioners work in environments where raising concerns — about model quality, ethical implications, technical risk, or process gaps — is safe and expected. Cultures where engineers feel unable to voice doubts about the systems they are building produce worse AI and worse outcomes for the people those systems affect.
  • Sustainable Pacing – Delivery timelines for AI systems are set to be achievable without sustained overwork. Crunch periods are exceptional and bounded — not a continuous state. Leaders are accountable for protecting their teams from the organisational pressure that converts ambitious timelines into harmful working patterns.
  • Continuous Learning Investment – AI is a field where relevant knowledge advances rapidly. We invest in ongoing learning for AI practitioners: time for self-directed study, access to training, conference participation, and community engagement. Practitioners who cannot keep pace with their own field accumulate an anxiety that sustainable practice must address.
  • Career Path Clarity – AI practitioners have clear, supported career paths within the organisation. Ambiguity about progression, value, and future opportunity is a source of anxiety that drives attrition. We invest in clear role definitions, honest performance feedback, and visible progression opportunities.
  • Ethical Support Structures – AI practitioners frequently encounter ethically complex situations: tension between business objectives and responsible AI practice, decisions about acceptable bias thresholds, questions about dual-use risk. We provide ethical support structures — access to ethics review, named ethics contacts, and forums for deliberation — so practitioners are not making these judgements alone.
  • Attrition and Wellbeing Monitoring – We actively monitor team health indicators for AI teams: attrition rates, engagement survey results, sick leave patterns, and qualitative feedback from retrospectives. Leading indicators of unsustainable conditions are acted upon before they produce burnout and departures.
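The monitoring and cognitive-load commitments above lend themselves to simple, automatable checks. A minimal sketch of such a leading-indicator review, assuming hypothetical indicator names and purely illustrative thresholds (the real metrics and limits would be set from the organisation's own baselines):

```python
from dataclasses import dataclass

@dataclass
class TeamHealth:
    """Snapshot of the indicators named above for one AI team (illustrative fields)."""
    name: str
    headcount: int
    departures_last_12m: int     # voluntary departures in the trailing year
    engagement_score: float      # engagement survey result, 0-100
    concurrent_initiatives: int  # high-complexity workstreams in flight

# Illustrative thresholds -- assumptions for this sketch, not policy values.
MAX_ATTRITION_RATE = 0.15          # trailing attrition above 15% flags a review
MIN_ENGAGEMENT = 60.0              # survey score below this flags a review
MAX_INITIATIVES_PER_PERSON = 1.5   # concurrent load above this flags overload risk

def health_flags(team: TeamHealth) -> list[str]:
    """Return the leading indicators that breach their thresholds, so leaders
    can act before the conditions produce burnout and departures."""
    flags = []
    if team.departures_last_12m / team.headcount > MAX_ATTRITION_RATE:
        flags.append("attrition")
    if team.engagement_score < MIN_ENGAGEMENT:
        flags.append("engagement")
    if team.concurrent_initiatives / team.headcount > MAX_INITIATIVES_PER_PERSON:
        flags.append("cognitive-load")
    return flags
```

A check like this is deliberately crude: it surfaces teams for a human conversation, it does not replace the qualitative retrospective feedback the policy also calls for.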

Why This Matters

The quality of AI systems is inseparable from the quality of the conditions in which they are built. Practitioners working under unsustainable pressure make more errors, take more technical shortcuts, document less thoroughly, and eventually leave — taking with them the institutional knowledge needed to maintain and improve the systems they built. The human capital concentrated in AI teams is expensive to build and easy to destroy. Protecting the conditions that allow AI practitioners to do excellent work is not just good people management — it is good AI governance.

Our Expectation

AI team working conditions are actively managed and periodically reviewed. Leaders are accountable for creating environments in which AI practitioners can do excellent, sustainable work. Teams that are consistently overloaded, burning out, or losing practitioners are not being productive — they are consuming capital that cannot be easily replaced. Making AI work sustainable for the people who build it is how we ensure AI practice that makes practitioners genuinely Happier and better at the consequential work they do.

Associated Standards
