AI Knowledge Sharing Frequency measures how often AI teams share learnings, experiment results, technique discoveries, failure analyses, and working demonstrations with the broader organisation — across formats including team showcases, written retrospectives, community of practice sessions, internal blog posts, and cross-team demos. It captures the organisational learning rate from AI work, not just the technical output.
AI knowledge is particularly prone to siloing. Experiment results that do not produce a deployed model are often never shared. Failure analyses are rarely written up. Techniques that one team finds useful take months to propagate across the organisation. This measure creates an expectation that AI knowledge is systematically shared, not just accumulated within individual teams — building institutional capability, avoiding duplicated effort, and creating the shared understanding that enables coordinated AI governance and strategy.
Knowledge Sharing Frequency = (Knowledge Sharing Events + Written Artefacts Published) / Team-Quarter

Optional participation coverage:

Participation Coverage (%) = (Unique Participants in Sharing Events / Total AI-Adjacent Staff) × 100

| Metric Range | Interpretation |
|---|---|
| ≥ 2 sharing events per team per month + regular written artefacts | Excellent — knowledge is flowing actively through the organisation |
| 1 sharing event per team per month + some written artefacts | Good — sharing is happening but could be more consistent and varied |
| Sharing is ad-hoc with no regular cadence | Needs improvement — knowledge is likely siloing within teams |
| Little to no sharing visible | Critical — AI knowledge is not propagating; significant duplication of effort and missed learning likely |
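The two formulas above can be sketched as a small helper. This is a minimal illustration only — the field and function names (`QuarterStats`, `knowledge_sharing_frequency`, `participation_coverage`) are assumptions, not part of the metric definition:

```python
from dataclasses import dataclass

# Illustrative data model — the document does not prescribe one.
@dataclass
class QuarterStats:
    sharing_events: int        # showcases, CoP sessions, cross-team demos
    written_artefacts: int     # retrospectives, internal blog posts, write-ups
    unique_participants: int   # distinct attendees across all sharing events
    ai_adjacent_staff: int     # total AI-adjacent staff in the organisation

def knowledge_sharing_frequency(stats: QuarterStats, team_quarters: int) -> float:
    """(Sharing events + written artefacts) normalised per team-quarter."""
    return (stats.sharing_events + stats.written_artefacts) / team_quarters

def participation_coverage(stats: QuarterStats) -> float:
    """Optional metric: percentage of AI-adjacent staff reached by sharing events."""
    return 100 * stats.unique_participants / stats.ai_adjacent_staff
```

For example, two teams that together ran 6 events and published 4 artefacts in one quarter (2 team-quarters) score a frequency of 5.0; if those events reached 30 of 120 AI-adjacent staff, coverage is 25%.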
**AI knowledge has a short half-life if not shared.** The insight a team gains from an experiment that does not work is extremely valuable — it prevents another team from spending two weeks on the same dead end. But this value evaporates if the insight stays in a private Slack thread.

**Knowledge sharing is the primary mechanism for cross-team AI capability growth.** Most AI teams are too small and too specialised to develop all needed capability internally. Regular cross-team sharing accelerates capability development across the organisation at a fraction of the cost of individual team training.

**Visible AI work builds confidence and trust in the AI programme.** Regular demonstrations of AI work — including experiments, prototypes, and deployed systems — build organisational literacy and trust in AI in ways that written strategy documents cannot. Knowledge sharing is an investment in the social licence for AI.

**Sharing failures is a prerequisite for becoming a learning organisation.** A culture where only successes are shared creates a distorted view of AI development. Teams that normalise sharing what did not work, and why, develop more realistic expectations, better risk models, and faster learning cycles.
**Nonaka & Takeuchi, *The Knowledge-Creating Company* (Oxford University Press, 1995).** The foundational model of organisational knowledge creation distinguishes between tacit knowledge (know-how held in individuals) and explicit knowledge (documented, shareable artefacts). The socialisation and externalisation processes they describe are directly applicable to AI team knowledge sharing — tacit knowledge from failed experiments must be externalised to have organisational value.

**Duhigg, *Smarter Faster Better* (Random House, 2016).** Duhigg's research on team cognition demonstrates that teams that develop explicit mechanisms for sharing mental models — including assumptions, uncertainties, and failures — outperform those relying on individual expertise alone, with direct implications for AI communities of practice and the value of structured knowledge sharing cadences.