
Practice: Inner-Source for AI Components

Purpose and Strategic Importance

AI teams across a large organisation frequently build the same capabilities independently: data connectors to common internal systems, evaluation harnesses for common model types, feature pipelines for shared data sources, fairness evaluation libraries, and experiment tracking utilities. This duplication wastes engineering capacity and produces inconsistent implementations of capabilities that should be standardised — such as fairness evaluation or safety testing — creating the risk of different teams making different implicit decisions about quality thresholds.

Inner-sourcing AI components — making reusable AI libraries, datasets, and tooling available across teams through an open contribution model — solves both problems simultaneously. It concentrates expertise in shared components that improve with use, it reduces duplicated effort, and it creates a forcing function for standards: when fairness evaluation is provided by a shared library rather than implemented independently by each team, it is far easier to ensure that all teams are using consistent, reviewed methodologies.
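To make the forcing-function point concrete, the sketch below shows what a single shared fairness-evaluation helper might look like. The function name, metric choice (demographic parity), and data shapes are illustrative assumptions, not an existing internal library: the point is that every team calling one reviewed implementation applies the same methodology and thresholds.

```python
# Sketch of a shared fairness-evaluation helper that an inner-source
# library might expose. All names here are hypothetical examples.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example: group "a" gets positives at 0.75, group "b" at 0.25
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Because every consuming team imports this one function, a methodology change (say, a different fairness metric) is reviewed once by the maintainers and propagates everywhere, rather than being re-decided team by team.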


Description of the Practice

  • Identifies AI components — data pipelines, feature stores, evaluation harnesses, training utilities, model wrappers — that are candidates for inner-sourcing based on demand from multiple teams and potential for standardisation.
  • Establishes a shared component repository with clear contribution guidelines, documentation standards, versioning practices, and review processes modelled on open-source software practices.
  • Designates maintainers for shared components who are accountable for reviewing contributions, maintaining documentation, managing versioning, and responding to consumer requests.
  • Promotes inner-source components across teams through regular showcases, onboarding documentation, and integration into team-level AI toolkits and standards.
  • Measures inner-source adoption and contribution activity, using this data to identify the most valued components and the areas where investment in shared infrastructure would have the highest impact.
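The adoption-measurement bullet above can be sketched as a small summary over usage and contribution events. The event shapes and field names here are assumptions for illustration; real data would come from whatever telemetry or repository analytics the organisation already collects.

```python
# Sketch of adoption and contribution measurement for shared components.
# Record shapes are hypothetical, standing in for real telemetry.
usage_events = [
    {"component": "eval-harness", "team": "fraud"},
    {"component": "eval-harness", "team": "credit"},
    {"component": "feature-store", "team": "fraud"},
]
contributions = [
    {"component": "eval-harness", "team": "credit", "owner_team": "platform"},
]

def adoption_summary(usage, contribs):
    """Per component: distinct consuming teams and non-owner contributions."""
    consumers = {}
    for event in usage:
        consumers.setdefault(event["component"], set()).add(event["team"])
    external = {}
    for c in contribs:
        if c["team"] != c["owner_team"]:  # contribution from outside the owning team
            external[c["component"]] = external.get(c["component"], 0) + 1
    return {
        comp: {"consumer_teams": len(teams),
               "external_contributions": external.get(comp, 0)}
        for comp, teams in consumers.items()
    }

print(adoption_summary(usage_events, contributions))
```

Components with many consuming teams but few external contributions are candidates for the free-rider concerns discussed later; components with neither are candidates for retirement.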

How to Practise It (Playbook)

1. Getting Started

  • Survey AI teams to identify which components they have built independently that solve problems other teams also face — these are the highest-priority candidates for inner-sourcing.
  • Start with one or two components that have the clearest value and the most immediate interest from multiple teams, rather than attempting to inner-source everything simultaneously.
  • Establish a shared code repository with basic contribution guidelines, a review process, and versioning conventions before inviting contributions, ensuring that the infrastructure for inner-sourcing is in place before the first contribution.
  • Invest in documentation for the first shared components — usage guides, examples, API documentation — recognising that components without documentation are not actually reusable in practice.
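One concrete versioning convention the steps above might adopt is semantic versioning, where a major-version bump signals a breaking change for consumers. The check below is a minimal sketch of that policy, assuming the repository uses plain `major.minor.patch` version strings.

```python
# Minimal semantic-versioning check, assuming shared components use
# plain "major.minor.patch" strings. The policy shown is illustrative.
def parse(version):
    """Split 'major.minor.patch' into a tuple of three integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_breaking_upgrade(current, candidate):
    """Under semver conventions, a major bump signals a breaking change."""
    return parse(candidate)[0] > parse(current)[0]

print(is_breaking_upgrade("1.4.2", "2.0.0"))  # True
print(is_breaking_upgrade("1.4.2", "1.5.0"))  # False
```

Agreeing this convention before the first contribution means consumers can pin to a major version and upgrade minor releases with confidence, which is part of making the first shared components safe to depend on.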

2. Scaling and Maturing

  • Build a component discovery mechanism — a searchable catalogue, a developer portal, or a curated index — that makes it easy for teams to find shared components and understand what they offer.
  • Develop contribution pathways that make it practical for teams to contribute to shared components without requiring expertise in all the systems and standards involved, lowering the barrier to participation.
  • Create incentive structures that recognise and reward inner-source contributions, preventing a free-rider problem where teams consume shared components but do not contribute to maintaining or improving them.
  • Establish a governance model for shared components that balances the needs of component producers and consumers, ensuring that components evolve in ways that serve the organisation's shared interests rather than the preferences of individual teams.
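The discovery mechanism described above can start as something very simple: a structured index of components searchable by name and tag. The entries, fields, and search behaviour below are hypothetical sketches, not a real developer-portal schema.

```python
# Sketch of a searchable component catalogue. Entries and field names
# are hypothetical; a real index would be generated from repo metadata.
CATALOGUE = [
    {"name": "fairness-eval", "tags": ["fairness", "evaluation"],
     "maintainer": "ml-platform", "docs": "wiki/fairness-eval"},
    {"name": "feature-store-client", "tags": ["features", "pipelines"],
     "maintainer": "data-eng", "docs": "wiki/feature-store"},
]

def search(query):
    """Return catalogue entries whose name or tags contain the query term."""
    q = query.lower()
    return [entry for entry in CATALOGUE
            if q in entry["name"] or any(q in tag for tag in entry["tags"])]

for entry in search("fairness"):
    print(entry["name"], "->", entry["docs"])
```

Even this level of structure answers the questions a would-be consumer has at the moment of discovery: what the component does, who maintains it, and where the documentation lives.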

3. Team Behaviours to Encourage

  • Check the inner-source catalogue before building new AI tooling — the habit of looking for existing solutions before building new ones is the foundation of effective inner-sourcing.
  • Contribute improvements back to shared components when you fix bugs or add capabilities for your own use, rather than maintaining private forks that diverge from the shared baseline.
  • Treat inner-source maintainers as partners rather than service providers — they are maintaining shared infrastructure on behalf of the organisation, and they deserve prompt, constructive engagement when issues arise.
  • Share AI datasets, trained model artefacts, and labelled examples through inner-source channels where appropriate, not just code — data is often the most valuable AI asset and the one most commonly duplicated.

4. Watch Out For…

  • Inner-source components that are contributed but not maintained, accumulating technical debt and outdated dependencies that make them increasingly dangerous to use over time.
  • Catalogue sprawl — too many components, poorly documented and inconsistently maintained — that makes the inner-source ecosystem harder to navigate than building from scratch.
  • Contribution processes that are so burdensome that teams maintain private implementations rather than contributing, defeating the purpose of inner-sourcing.
  • Inner-source governance that prioritises the preferences of the component's original producer over the needs of the broader consumer community, creating a bottleneck that limits adoption.

5. Signals of Success

  • Multiple teams are actively using inner-sourced AI components, with measurable reduction in duplicated implementation effort across the organisation.
  • Shared components are receiving contributions from teams other than the original producers, demonstrating genuine inner-source dynamics rather than one-way distribution.
  • AI standards — particularly for fairness evaluation, safety testing, and model governance — are enforced consistently across teams through shared libraries rather than requiring teams to implement them independently.
  • Teams report that finding and using inner-sourced components is easy and valuable, with clear documentation and responsive maintainers — the developer experience of inner-sourced AI components is treated as a first-class quality concern.
  • The organisation can demonstrate cost savings and quality improvements attributable to inner-sourcing, building the evidence base for sustained investment in shared AI infrastructure.

Associated Standards
  • AI tooling is selected with developer experience as a primary criterion
  • AI teams operate with clear ownership and psychological safety
  • AI work is recognised and celebrated as a team achievement
