
Standard: AI Knowledge Sharing Frequency

Description

AI Knowledge Sharing Frequency measures how often AI teams share learnings, experiment results, technique discoveries, failure analyses, and working demonstrations with the broader organisation — across formats including team showcases, written retrospectives, community of practice sessions, internal blog posts, and cross-team demos. It captures the organisational learning rate from AI work, not just the technical output.

AI knowledge is particularly prone to siloing. Experiment results that do not produce a deployed model are often never shared. Failure analyses are rarely written up. Techniques that one team finds useful take months to propagate across the organisation. This measure creates an expectation that AI knowledge is systematically shared, not just accumulated within individual teams — building institutional capability, avoiding duplicated effort, and creating the shared understanding that enables coordinated AI governance and strategy.

How to Use

What to Measure

  • Number of AI knowledge sharing events per team per month (showcases, demos, community of practice sessions, lunch-and-learns)
  • Volume of written knowledge artefacts published (experiment retrospectives, model cards, technical blog posts, runbooks) per team per quarter
  • Cross-team reach: number of teams or individuals outside the immediate AI team who engage with shared knowledge
  • Diversity of knowledge type: are teams sharing only successes, or also failures, experiments that were abandoned, and partial results?
  • Knowledge utilisation: whether published knowledge is referenced or reused by other teams in their AI work

Formula

Knowledge Sharing Frequency = (Knowledge Sharing Events + Written Artefacts Published) / Team-Quarters — i.e. events and artefacts combined, normalised per team, per quarter

Optional:

  • Reach score: (Unique participants in sharing events / Total AI-adjacent staff) × 100
  • Cross-team adoption rate: number of cases where one team's AI learning was explicitly applied by another team
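The formula and the optional reach score above can be sketched directly. This is a minimal illustration, assuming event and artefact counts are already aggregated per team per quarter; the function and parameter names are illustrative, not part of any standard tooling.

```python
def knowledge_sharing_frequency(events: int, artefacts: int, team_quarters: int) -> float:
    """Sharing events plus written artefacts, normalised per team-quarter."""
    return (events + artefacts) / team_quarters

def reach_score(unique_participants: int, ai_adjacent_staff: int) -> float:
    """Percentage of AI-adjacent staff who engaged with sharing events."""
    return unique_participants / ai_adjacent_staff * 100

# Example: 9 events and 6 written artefacts across 3 teams in one quarter
freq = knowledge_sharing_frequency(events=9, artefacts=6, team_quarters=3)
reach = reach_score(unique_participants=40, ai_adjacent_staff=160)
print(freq)   # 5.0 events + artefacts per team-quarter
print(reach)  # 25.0 percent reach
```

The cross-team adoption rate is a simple count of documented reuse cases, so it needs no formula; the harder part is capturing those cases at all (see Instrumentation Tips).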

Instrumentation Tips

  • Maintain a shared AI knowledge calendar visible across engineering and product organisations
  • Use a tagging system in the internal knowledge base to classify AI artefacts by type (experiment, incident retrospective, model card, tutorial) and topic
  • Track session attendance for community of practice events to measure cross-team reach over time
  • Assign knowledge sharing frequency as a team-level metric reviewed in quarterly team health assessments
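The tagging tip above can be made concrete with a small sketch: counting knowledge-base artefacts by type and by contributing team from a tagged export. The record structure and team names here are hypothetical; any knowledge base that supports type and topic tags could produce an equivalent export.

```python
from collections import Counter

# Hypothetical knowledge-base export: one record per published artefact,
# tagged by contributing team, artefact type, and topic.
artefacts = [
    {"team": "fraud-ml",  "type": "experiment",             "topic": "feature-store"},
    {"team": "fraud-ml",  "type": "incident retrospective", "topic": "drift"},
    {"team": "search-ai", "type": "model card",             "topic": "ranking"},
    {"team": "search-ai", "type": "experiment",             "topic": "embeddings"},
    {"team": "platform",  "type": "tutorial",               "topic": "feature-store"},
]

by_type = Counter(a["type"] for a in artefacts)       # diversity of knowledge type
contributing_teams = {a["team"] for a in artefacts}   # cross-team contribution

print(by_type)                  # counts per artefact type
print(len(contributing_teams))  # number of teams contributing this quarter
```

A skew in `by_type` towards model cards and tutorials, with no experiment or incident retrospectives, is the "successes only" pattern flagged under Common Pitfalls.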

Benchmarks

  • ≥ 2 sharing events per team per month, plus regular written artefacts — Excellent: knowledge is flowing actively through the organisation
  • 1 sharing event per team per month, with some written artefacts — Good: sharing is happening but could be more consistent and varied
  • Sharing is ad hoc with no regular cadence — Needs improvement: knowledge is likely siloing within teams
  • Little to no sharing visible — Critical: AI knowledge is not propagating; significant duplication of effort and missed learning is likely
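The benchmark bands above can be applied mechanically. This is a sketch only: the thresholds are taken from the table, and the written-artefact conditions are simplified to a single boolean.

```python
def benchmark_band(events_per_team_month: float, regular_artefacts: bool) -> str:
    """Map a measured sharing cadence onto the benchmark bands.

    Thresholds follow the benchmarks table; the "regular/some written
    artefacts" conditions are collapsed into one boolean for simplicity.
    """
    if events_per_team_month >= 2 and regular_artefacts:
        return "Excellent"
    if events_per_team_month >= 1:
        return "Good"
    if events_per_team_month > 0:
        return "Needs improvement"
    return "Critical"

print(benchmark_band(2.5, regular_artefacts=True))   # Excellent
print(benchmark_band(0.0, regular_artefacts=False))  # Critical
```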

Why It Matters

  • AI knowledge has a short half-life if not shared. The insight a team gains from an experiment that does not work is extremely valuable — it prevents another team from spending two weeks on the same dead end. But this value evaporates if the insight stays in a private Slack thread.

  • Knowledge sharing is the primary mechanism for cross-team AI capability growth. Most AI teams are too small and too specialised to develop all needed capability internally. Regular cross-team sharing accelerates capability development across the organisation at a fraction of the cost of individual team training.

  • Visible AI work builds confidence and trust in the AI programme. Regular demonstrations of AI work — including experiments, prototypes, and deployed systems — build organisational literacy and trust in AI in ways that written strategy documents cannot. Knowledge sharing is an investment in the social licence for AI.

  • Sharing failures is a prerequisite for learning-organisation status. A culture where only successes are shared creates a distorted view of AI development. Teams that normalise sharing what did not work, and why, develop more realistic expectations, better risk models, and faster learning cycles.

Best Practices

  • Create a standing AI community of practice that meets regularly with a documented agenda and rotating responsibility for presenting
  • Establish an expectation that every completed experiment — regardless of outcome — produces a brief written retrospective published to the shared knowledge base
  • Recognise and celebrate knowledge sharing contributions publicly alongside product delivery achievements
  • Make it easy to share: a low-friction internal blog, a simple template for experiment retrospectives, and a standing slot in team ceremonies all reduce the activation energy for sharing
  • Invite external speakers — from industry, academia, or partner organisations — to cross-fertilise the internal AI community with outside perspectives

Common Pitfalls

  • Counting sharing events without measuring the quality or reach of shared content, rewarding volume over impact
  • Allowing the community of practice to become a showcase for completed successes only, discouraging sharing of in-progress work, failures, and uncertainties
  • Not creating time in team capacity plans for knowledge sharing activities, meaning they are deprioritised when delivery pressure is high
  • Building a knowledge base that becomes a write-only archive rather than a living reference that teams actively search and build upon

Signals of Success

  • The AI community of practice has a regular cadence with consistent attendance from multiple teams
  • The organisation's internal AI knowledge base has received contributions from more than three teams in the past quarter
  • At least one team has documented a case where they applied learning from another team's published experiment to their own work
  • Failed experiments are written up and shared with the same regularity as successful deployments

Related Measures

  • [[AI Team Psychological Safety Score]]
  • [[AI Technical Debt Ratio]]
  • [[Experiment-to-Production Cycle Time]]

Aligned Industry Research

  • Nonaka & Takeuchi — The Knowledge-Creating Company (Oxford University Press, 1995). The foundational model of organisational knowledge creation distinguishes between tacit knowledge (know-how held in individuals) and explicit knowledge (documented, shareable artefacts). The socialisation and externalisation processes they describe are directly applicable to AI team knowledge sharing — tacit knowledge from failed experiments must be externalised to have organisational value.

  • Duhigg — Smarter Faster Better (Random House, 2016). Duhigg's research on team cognition demonstrates that teams that develop explicit mechanisms for sharing mental models — including assumptions, uncertainties, and failures — outperform those relying on individual expertise alone, with direct implications for AI communities of practice and the value of structured knowledge sharing cadences.
