
Standard: Model complexity is proportionate to the problem being solved

Purpose and Strategic Importance

This standard establishes that the complexity of an AI model — in terms of architecture, compute requirements, interpretability, and operational overhead — must be justified by the problem it is solving. It supports the policy of prioritising AI use cases by impact rather than novelty, countering the engineering tendency to reach for the most sophisticated tool whether or not the problem demands it. A logistic regression that solves the problem reliably, cheaply, and explainably is often more valuable than a large neural network that solves it marginally better at ten times the cost.
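The trade-off above can be made concrete with a simple proportionality check. This is an illustrative sketch, not a prescribed formula: the function name, the performance-gain-per-unit-of-cost ratio, and its threshold are all assumptions chosen to mirror the "marginally better at ten times the cost" example.

```python
# Hypothetical proportionality check: is a complex model's accuracy gain
# worth its extra cost? The threshold value is an illustrative assumption.

def complexity_justified(simple_acc, complex_acc, simple_cost, complex_cost,
                         min_gain_per_cost_ratio=0.01):
    """Return True if the accuracy gain per unit of extra cost clears
    the (assumed) minimum ratio."""
    gain = complex_acc - simple_acc
    extra_cost = complex_cost - simple_cost
    if extra_cost <= 0:
        # Complex model costs no more than the simple one: any gain justifies it.
        return gain > 0
    return (gain / extra_cost) >= min_gain_per_cost_ratio

# A 2-point accuracy gain at ten times the cost fails the check:
# complexity_justified(0.91, 0.93, simple_cost=1.0, complex_cost=10.0)
```

In practice the cost inputs would come from infrastructure estimates (training, inference, maintenance) rather than a single scalar, but the shape of the decision is the same.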

Strategic Impact

  • Reduces the total cost of ownership of the AI portfolio by aligning infrastructure spend to actual model requirements
  • Improves model interpretability and auditability by favouring simpler architectures where the problem allows
  • Accelerates delivery by steering teams away from complex solutions that take longer to build, train, and maintain
  • Reduces operational risk by limiting the failure surface area to what the problem genuinely demands
  • Creates a culture of engineering pragmatism that values fitness for purpose over technical prestige

Risks of Not Having This Standard

  • Engineering teams over-engineer solutions that consume disproportionate compute, time, and maintenance effort
  • Simpler, more explainable models are passed over in favour of complex architectures that are harder to govern
  • Operational costs for AI infrastructure escalate because model complexity is not challenged at design time
  • Models that are difficult to interpret undermine compliance and audit requirements in regulated contexts
  • Teams become dependent on complex infrastructure they cannot maintain independently, creating vendor lock-in

CMMI Maturity Model

Level 1 – Initial

  • People & Culture: Engineers default to the most complex or fashionable model architecture without considering simpler alternatives
  • Process & Governance: No design review examines whether model complexity is justified by the problem; decisions are made by individual engineers
  • Technology & Tools: Tooling choices are driven by what the team finds interesting rather than what the problem requires
  • Measurement & Metrics: Model complexity is not measured; cost-to-performance ratio is not tracked

Level 2 – Managed

  • People & Culture: Teams discuss model architecture choices at design stage; simpler baselines are attempted before complex models
  • Process & Governance: A baseline-first rule is informally established: try a simple model before proposing a complex one
  • Technology & Tools: Teams document their model selection rationale including alternatives considered and reasons for rejection
  • Measurement & Metrics: Performance-per-compute-cost is tracked informally; teams are aware of the trade-off
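The baseline-first rule can be expressed as a small workflow sketch. The callables, the required performance margin, and the return convention are all illustrative assumptions standing in for real training and evaluation code.

```python
# Illustrative baseline-first workflow. `train_baseline` and `train_complex`
# stand in for real training runs that return an evaluation score.

def baseline_first(train_baseline, train_complex, required_delta=0.02):
    """Try the simple model first; keep the complex candidate only if it
    beats the baseline by at least `required_delta` (an assumed margin)."""
    baseline_score = train_baseline()
    complex_score = train_complex()
    if complex_score - baseline_score >= required_delta:
        return "complex", complex_score
    return "baseline", baseline_score
```

The point of the sketch is the ordering: the simple model is always trained and scored, so the complex proposal must be argued against a measured baseline rather than a hypothetical one.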

Level 3 – Defined

  • People & Culture: Model complexity is a standard design review criterion; engineers are expected to justify complexity with evidence of performance need
  • Process & Governance: A defined proportionality framework requires teams to document the performance delta between simple and complex candidate models before approval
  • Technology & Tools: Automated experiment tracking captures performance across model complexity tiers; results are available for design review
  • Measurement & Metrics: Complexity-adjusted performance metrics are reported; cost-per-unit-of-value is tracked alongside accuracy
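One way to picture the experiment records a design review would consume is a table of candidates per complexity tier with a cost-per-unit-of-value metric derived from it. The tier names, field names, and figures below are invented for illustration, not benchmarks.

```python
# Hypothetical experiment records across complexity tiers; all values
# are illustrative assumptions.
experiments = [
    {"tier": "simple",  "accuracy": 0.90, "cost_per_1k_preds": 0.05},
    {"tier": "medium",  "accuracy": 0.93, "cost_per_1k_preds": 0.40},
    {"tier": "complex", "accuracy": 0.94, "cost_per_1k_preds": 2.00},
]

def cost_per_accuracy_point(exp):
    """Cost-per-unit-of-value: spend divided by the accuracy it buys."""
    return exp["cost_per_1k_preds"] / exp["accuracy"]

# The review picks the tier with the best value, not the best raw accuracy.
best = min(experiments, key=cost_per_accuracy_point)
```

With these example numbers the simple tier wins on value even though the complex tier wins on raw accuracy, which is exactly the performance delta a proportionality framework asks teams to defend.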

Level 4 – Quantitatively Managed

  • People & Culture: Teams are accountable for total cost of ownership estimates at model design time; complexity decisions are reviewed at architecture governance
  • Process & Governance: Model complexity tiers are defined with associated infrastructure cost profiles; business cases must justify the tier selected
  • Technology & Tools: Cost modelling tools estimate training, inference, and maintenance costs by model complexity tier at design time
  • Measurement & Metrics: Actual versus estimated cost per prediction is tracked; efficiency ratios are reviewed quarterly and used to challenge over-engineered systems
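The actual-versus-estimated tracking can be sketched as a variance check that surfaces over-engineered systems for review. The overrun threshold and the data shape are assumptions for illustration; a real review would pull these figures from cost modelling and billing systems.

```python
# Sketch of a quarterly cost-overrun review. The 25% overrun threshold
# is an assumed governance parameter, not a recommended value.

def cost_variance(estimated, actual):
    """Fractional overrun of actual versus estimated cost per prediction."""
    return (actual - estimated) / estimated

def flag_over_engineered(models, overrun_threshold=0.25):
    """Return names of models whose actual cost per prediction exceeds
    the design-time estimate by more than the assumed threshold."""
    return [name for name, (estimated, actual) in models.items()
            if cost_variance(estimated, actual) > overrun_threshold]
```

Models flagged here become the inputs to the "challenge over-engineered systems" step: each one must re-justify its complexity tier or be refactored to a simpler architecture.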

Level 5 – Optimising

  • People & Culture: Simplicity is celebrated as an engineering virtue; teams share examples of problems solved elegantly with simple models
  • Process & Governance: Complexity standards are continuously refined based on emerging lightweight architectures and evolving cost benchmarks
  • Technology & Tools: Model distillation, quantisation, and pruning practices are systematically applied to reduce complexity without sacrificing necessary performance
  • Measurement & Metrics: Portfolio-level complexity metrics are tracked; the organisation monitors the proportion of compute consumed by over-engineered models and drives it down
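Of the reduction practices named above, quantisation is the simplest to show in miniature. The sketch below is a toy symmetric linear quantiser over a plain list of weights, assumed here purely to illustrate the idea that precision can be traded for footprint with a small, measurable error; production systems would use the quantisation tooling of their ML framework.

```python
# Toy post-training quantisation: map float weights to signed integers
# with a shared scale factor (symmetric linear quantisation).

def quantize(weights, bits=8):
    """Quantise a list of float weights to signed `bits`-bit integers.
    Returns the integer codes and the scale needed to recover floats."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from integer codes."""
    return [c * scale for c in codes]
```

The round-trip error is bounded by half a quantisation step, which is the concrete sense in which complexity (storage and compute precision) is reduced "without sacrificing necessary performance".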

Key Measures

  • Percentage of AI projects where a simpler baseline model was evaluated before the production architecture was selected
  • Average performance delta between the selected model and the simplest viable alternative
  • Cost per prediction across the AI portfolio tracked against benchmark targets per use case category
  • Proportion of deployed models that exceed their initial infrastructure cost estimates due to unplanned complexity
  • Rate at which existing models are refactored to simpler architectures following post-deployment cost review
Associated Policies
  • Prioritise AI use cases by impact and not novelty
Associated Practices
  • Transfer Learning and Fine-Tuning
  • Feature Engineering and Selection
