Practice: Feasibility and Data Readiness Assessment
Purpose and Strategic Importance
The most common cause of AI project failure is not insufficient ML expertise or inadequate modelling — it is insufficient data. Teams discover midway through development that their training data is too sparse, too noisy, or too biased to support the model they are trying to build. Feasibility and Data Readiness Assessment moves this discovery to the front of the process, enabling teams to make informed go/no-go decisions about AI investments before significant time and resources are committed.
Feasibility assessment also challenges the assumption that AI is always the right solution. A realistic evaluation of the technical requirements for AI to succeed — sufficient labelled data, a well-defined learning signal, a stable enough environment for a trained model to generalise — frequently reveals that simpler solutions would perform better with less investment. This is valuable intelligence that protects the organisation from AI initiatives that are appealing in concept but not viable in practice.
Description of the Practice
- Assesses the availability, volume, quality, and accessibility of data required for the proposed AI use case, including the feasibility of obtaining sufficient labelled examples.
- Evaluates technical feasibility: whether the problem is well-defined enough to be learned from data, whether the learning signal is strong enough, and whether the target variable can be reliably measured.
- Estimates the scale of data infrastructure investment required — storage, processing, labelling — and validates that it is proportionate to the expected value of the use case.
- Identifies regulatory and ethical constraints on data use — consent, purpose limitation, data residency — that may limit the scope or approach of the AI system.
- Produces a structured feasibility report that informs the go/no-go decision and, if the decision is to proceed, provides the foundation for the development-phase data strategy.
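The structured report described above can be captured as a simple record type so that every assessment carries the same fields into the go/no-go discussion. This is a minimal sketch: the field names, the example use case, and the `Recommendation` categories are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Recommendation(Enum):
    GO = "go"
    NO_GO = "no-go"
    CONDITIONAL = "conditional"  # proceed only once named gaps are closed

@dataclass
class FeasibilityReport:
    """Structured record backing the go/no-go decision (illustrative fields)."""
    use_case: str
    target_variable: str
    labelled_examples_available: int
    labelled_examples_required: int
    data_quality_issues: list = field(default_factory=list)
    regulatory_constraints: list = field(default_factory=list)  # consent, residency, etc.
    infrastructure_estimate: str = ""
    recommendation: Recommendation = Recommendation.NO_GO

    def label_gap(self) -> int:
        """Labelled examples still needed; zero if the requirement is already met."""
        return max(0, self.labelled_examples_required - self.labelled_examples_available)

# Hypothetical use case, purely to show the shape of a completed report.
report = FeasibilityReport(
    use_case="invoice fraud detection",
    target_variable="is_fraudulent",
    labelled_examples_available=3200,
    labelled_examples_required=10000,
    regulatory_constraints=["purpose limitation on payment data"],
    recommendation=Recommendation.CONDITIONAL,
)
print(report.label_gap())  # 6800
```

Keeping the report as structured data rather than free-form prose also makes the later scoring and outcome-tracking steps straightforward to automate.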
How to Practise It (Playbook)
1. Getting Started
- Define the minimum data requirements for the use case: what is the target variable, how many labelled examples are needed as a minimum, and what input features are necessary?
- Conduct a data inventory for the use case — cataloguing available data sources, their format, volume, quality, and accessibility — without assuming that data that exists is actually usable.
- Run a quick data quality assessment on the most important candidate data sources to identify showstopper quality issues before investing further.
- Produce a one-page feasibility summary that clearly states what data is available, what is missing, and the team's assessment of whether the use case is viable to proceed.
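The quick data quality assessment in the steps above can start as a few lines of code that surface showstoppers: missing labels, missing field values, and duplicate rows. A minimal sketch, assuming the data arrives as a list of dicts; the sample records and field names are hypothetical.

```python
from collections import Counter

def quick_quality_check(records, label_field):
    """Flag showstopper issues in a candidate dataset.

    `records` is a list of dicts. Returns headline rates only; this is a
    first-pass screen, not a substitute for systematic quality analysis.
    """
    n = len(records)
    # A label of 0 or False is valid; only None/empty counts as missing.
    missing_label = sum(1 for r in records if r.get(label_field) in (None, ""))
    # Exact-duplicate rows, detected by comparing full field tuples.
    dupes = n - len({tuple(sorted(r.items())) for r in records})
    # Per-field count of empty values (includes the label field itself).
    field_gaps = Counter()
    for r in records:
        for k, v in r.items():
            if v in (None, ""):
                field_gaps[k] += 1
    return {
        "rows": n,
        "missing_label_rate": missing_label / n,
        "duplicate_rate": dupes / n,
        "worst_field_gap": field_gaps.most_common(1)[0] if field_gaps else None,
    }

# Hypothetical sample illustrating the issues the check is meant to catch.
sample = [
    {"amount": 120, "country": "DE", "is_fraudulent": 0},
    {"amount": 120, "country": "DE", "is_fraudulent": 0},   # exact duplicate
    {"amount": 75,  "country": "",   "is_fraudulent": None},  # missing label + field
    {"amount": 300, "country": "FR", "is_fraudulent": 1},
]
result = quick_quality_check(sample, "is_fraudulent")
print(result)
```

The headline rates from a check like this are exactly what the one-page feasibility summary needs: what data exists, how much of it is usable, and where the worst gaps are.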
2. Scaling and Maturing
- Develop a standardised feasibility assessment template that guides teams through the technical, data, and regulatory dimensions systematically, enabling consistent quality assessments across use cases.
- Build a data readiness scoring model that provides a quantitative assessment of data quality, volume, and accessibility, enabling comparisons across candidate use cases.
- Create a data readiness playbook that guides teams through common data gaps — insufficient volume, missing labels, data access barriers — with standard approaches for addressing each.
- Track the correlation between feasibility assessment scores and actual project outcomes, using outcome data to improve the accuracy of assessments over time.
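The readiness scoring model described above can begin as a simple weighted average over the assessed dimensions, normalised to a 0-100 score so that candidate use cases are directly comparable. The dimension names and default weights here are illustrative assumptions; in practice the weights should be calibrated against the outcome data the last step collects.

```python
def readiness_score(dimensions, weights=None):
    """Weighted 0-100 readiness score from per-dimension ratings in [0, 1].

    `dimensions` maps dimension name -> rating. Default dimensions and
    weights are illustrative, not a standard.
    """
    weights = weights or {"quality": 0.4, "volume": 0.3, "accessibility": 0.3}
    # Every weighted dimension must be rated, or the score is meaningless.
    assert set(dimensions) == set(weights), "rate every weighted dimension"
    return round(100 * sum(weights[d] * dimensions[d] for d in weights), 1)

# Hypothetical use case: strong quality and access, thin volume.
score = readiness_score({"quality": 0.8, "volume": 0.5, "accessibility": 0.9})
print(score)  # 74.0
```

Scores like this support ranking, not false precision: a 74 versus a 40 is a meaningful signal, a 74 versus a 71 is not.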
3. Team Behaviours to Encourage
- Be honest about data limitations in feasibility assessments — an assessment that overstates data readiness in order to reach development wastes far more resources than an honest no-go recommendation.
- Separate the feasibility question from the advocacy question — the person conducting the assessment should not have a stake in a particular outcome, as this creates bias towards optimistic assessments.
- Identify data gaps as problems to be solved rather than reasons to abandon use cases, exploring data acquisition, synthetic data, and labelling strategies as potential paths forward.
- Engage data engineers and data governance specialists in feasibility assessments, not just data scientists — data access and compliance constraints are often the decisive factors.
4. Watch Out For…
- Feasibility assessments that evaluate only technical considerations while ignoring regulatory and ethical constraints on data use that could block the use case entirely.
- Optimistic assessments of data quality based on convenience sampling rather than systematic analysis — a representative quality assessment requires examining the full dataset, or a properly random sample of it, not just the clean rows.
- Treating feasibility assessment as a one-time activity rather than an iterative process — new information often emerges during early development that warrants revisiting the feasibility conclusion.
- Allowing feasibility assessment to be compressed under delivery pressure, producing assessments that satisfy a process requirement without providing genuine decision support.
5. Signals of Success
- Feasibility assessments are completed for every use case before development begins, and the assessments provide genuine go/no-go decision support rather than post-hoc justifications.
- The assessments have recommended against proceeding with use cases where data readiness is insufficient, preventing wasteful development investment.
- Teams have a clear process for addressing common data gaps identified during assessment, with known paths to data acquisition, synthetic generation, or use case scope reduction.
- Development projects that proceed following a positive feasibility assessment have significantly higher data-related success rates than historical projects that skipped this stage.
- Feasibility assessment outputs are used as inputs to data strategy and infrastructure investment planning, making them valuable beyond individual project decisions.