Predictive Analytics
Last reviewed April 2026
The model is accurate. The predictions are sound. And nobody uses them. The most common failure mode of predictive analytics in financial services is not a technical one. It is the gap between a prediction and a decision, between a model that correctly forecasts an outcome and a workflow that actually acts on it.
What is predictive analytics?
Predictive analytics uses historical data and statistical or machine learning models to forecast future outcomes. In financial services, this encompasses credit default prediction, customer churn forecasting, claims cost estimation, market movement prediction, fraud propensity scoring, and operational demand forecasting. The promise is that decisions informed by data are better than decisions informed by intuition alone.
The technical components are well established. Feature engineering, model training, validation, and deployment follow well-documented patterns. The tooling is mature. Cloud platforms provide managed machine learning services that reduce the infrastructure overhead. Open-source libraries cover every major algorithm and evaluation technique. The technical barriers to building a predictive model have never been lower.
The problem has shifted from "can we build an accurate model" to "can we embed the model's output into a decision workflow where someone acts on it." A churn prediction model that identifies customers likely to leave is useful only if there is a retention process that triggers when the model flags a customer. A claims cost prediction is useful only if the reserving team uses it. The last mile, from prediction to action, is where most predictive analytics programmes stall.
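The last mile can be made concrete: a prediction only becomes a decision when it is routed into a workflow someone owns. A minimal sketch, assuming an illustrative churn score, threshold, and retention task queue (none of these names come from a specific product):

```python
# Hedged sketch: wiring a churn score to a concrete action rather than a
# report. The threshold, field names, and task structure are illustrative
# assumptions, not a reference implementation.

THRESHOLD = 0.7

def route_prediction(customer_id, churn_score, task_queue):
    """Turn a prediction into a decision: a flagged customer becomes a
    retention workflow item that an owner is expected to action."""
    if churn_score >= THRESHOLD:
        task_queue.append({
            "customer": customer_id,
            "action": "retention_call",
            "priority": "high" if churn_score >= 0.9 else "normal",
        })

queue = []
for cid, score in [("C001", 0.92), ("C002", 0.40), ("C003", 0.75)]:
    route_prediction(cid, score, queue)
print(queue)  # two customers routed into the retention workflow, one high priority
```

The point of the sketch is the shape, not the code: the model's output lands in a queue that an existing process consumes, rather than in a dashboard.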
The landscape
The shift from predictive to prescriptive analytics is the current frontier. Predictive analytics tells you what is likely to happen. Prescriptive analytics tells you what to do about it. The difference is causal inference: understanding not just that a customer is likely to churn, but which intervention is most likely to prevent it. This requires a different modelling approach (experimental design, causal models, and counterfactual analysis) that is more demanding than standard prediction but substantially more valuable to the business.
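The simplest form of this causal question is uplift: how much does an intervention change the outcome relative to doing nothing? A minimal sketch, assuming data from a randomised retention experiment with illustrative segment and intervention names:

```python
# Hedged sketch: estimating intervention uplift from a randomised retention
# experiment, rather than predicting churn alone. Segment names, intervention
# names, and the toy data are illustrative assumptions.

# Each record: (segment, intervention, retained) where retained is 1 or 0.
experiment = [
    ("high_value", "discount", 1), ("high_value", "discount", 1),
    ("high_value", "discount", 0), ("high_value", "none", 0),
    ("high_value", "none", 1), ("high_value", "none", 0),
    ("low_value", "discount", 1), ("low_value", "discount", 0),
    ("low_value", "none", 1), ("low_value", "none", 1),
]

def retention_rate(records, segment, intervention):
    rows = [r for s, i, r in records if s == segment and i == intervention]
    return sum(rows) / len(rows) if rows else 0.0

def uplift(records, segment, intervention, control="none"):
    # Uplift = retention under the intervention minus retention under control.
    return (retention_rate(records, segment, intervention)
            - retention_rate(records, segment, control))

print(uplift(experiment, "high_value", "discount"))  # discount uplift vs. no action
```

Note that the low-value segment in this toy data retains better with no intervention at all: the prescriptive answer can be "do nothing", which a pure churn score never reveals.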
Foundation models are compressing the time-to-value for predictive analytics. Pre-trained models fine-tuned on domain-specific data can produce useful predictions with a fraction of the training data and development time that traditional approaches require. For financial services, this means that use cases previously considered too niche to justify the investment (predicting which commercial insurance renewals will be contested, for example) become viable because the development cost has dropped significantly.
Model operations (MLOps) has matured from an aspiration to a discipline. Automated model monitoring, drift detection, retraining pipelines, and A/B testing infrastructure are now available as products rather than requiring custom engineering. The gap between building a model and operating a model in production is narrowing, but it has not closed. Financial services organisations still underestimate the ongoing cost of model maintenance relative to the initial development cost.
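Drift detection, the core of that monitoring discipline, is conceptually simple: compare the distribution of scores (or inputs) the model sees in production against the distribution it was validated on. A minimal sketch of one common metric, the Population Stability Index, with illustrative bin edges and toy data:

```python
# Hedged sketch: Population Stability Index (PSI) for score drift monitoring.
# Bin edges, thresholds, and the toy score samples are illustrative; MLOps
# products implement this (and richer tests) out of the box.

import math

def psi(baseline, current, edges):
    """PSI between two score samples, bucketed by the given sorted edges.

    A common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift (exact thresholds vary by organisation).
    """
    def proportions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index
        # A small floor avoids log(0) when a bin is empty.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(baseline), proportions(current)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
current_scores  = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]
print(psi(baseline_scores, current_scores, edges=[0.25, 0.5, 0.75]))
```

In production the baseline is frozen at validation time and the current window slides; a breach of the drift threshold triggers investigation or retraining rather than silent degradation.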
How AI changes this
Foundation models reduce the cold-start problem. Previously, building a predictive model for a new use case required months of data collection, feature engineering, and model training. Foundation models pre-trained on broad financial data can be fine-tuned on smaller, domain-specific datasets to produce useful predictions in weeks. This changes the economics of predictive analytics: more use cases become viable because the per-use-case cost drops.
Automated feature engineering discovers predictive signals in data that human analysts would miss. An AI system that systematically explores feature combinations, interaction effects, and temporal patterns across a dataset can identify predictive features that would take a data scientist months to discover through manual exploration. The resulting models are often more accurate and more robust than those built with human-engineered features alone.
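The mechanics of that systematic exploration can be sketched in miniature: enumerate candidate interaction features and rank each by its association with the target. A toy version, assuming synthetic data and illustrative feature names (real tools search far larger spaces with far better statistics):

```python
# Hedged sketch: brute-force search over pairwise feature interactions,
# ranking each candidate product feature by absolute correlation with the
# target. Feature names and data are synthetic illustrations.

import math
from itertools import combinations

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_interactions(features, target):
    """Score every pairwise product feature against the target."""
    scored = []
    for a, b in combinations(features, 2):
        product = [x * y for x, y in zip(features[a], features[b])]
        scored.append(((a, b), abs(pearson(product, target))))
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

features = {
    "age":     [1, 2, 3, 4, 5],
    "balance": [2, 1, 4, 3, 5],
    "tenure":  [5, 4, 3, 2, 1],
}
# Synthetic target deliberately driven by an age x balance interaction.
target = [a * b for a, b in zip(features["age"], features["balance"])]
print(rank_interactions(features, target)[0][0])  # strongest interaction pair
```

The search surfaces the age-balance interaction because the target was built from it; the value of automated approaches is doing this across thousands of candidates, with proper validation to avoid rewarding spurious correlations.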
Real-time prediction is replacing batch prediction for an expanding set of use cases. Credit scoring at the point of application, fraud detection at the point of payment, and churn intervention at the point of customer contact all require predictions in milliseconds, not hours. The infrastructure for real-time model serving has matured, but the organisational processes that consume predictions often have not. A real-time prediction served to a call centre agent is wasted if the agent has no authority or script to act on it.
The connection to actuarial modelling is direct. Predictive models that forecast claims frequency and severity at a granular level inform actuarial pricing and reserving. The distinction between a predictive analytics model and an actuarial model is increasingly one of governance and validation rather than technique. Organisations that manage these as a single model portfolio avoid duplication and improve consistency.
What to know before you start
Define the decision before you build the model. What decision will change based on the model's output? Who makes that decision? What is their current process? If you cannot answer these questions, you do not have a predictive analytics use case; you have an interesting data science project. The distinction matters because use cases generate ROI and projects consume budget.
Model accuracy is necessary but not sufficient. A model with 85 per cent accuracy that is embedded in a workflow and acted upon generates more value than a model with 95 per cent accuracy that produces a daily report nobody reads. Invest in the integration, the user interface, and the change management as much as in the model itself.
Data governance is the prerequisite, not a parallel workstream. If the data feeding your predictive model is inaccurate, incomplete, or inconsistently defined across sources, the model's predictions will be unreliable. Worse, they will be unreliable in ways that are difficult to diagnose. Invest in data quality for the specific datasets your model uses before investing in model sophistication.
Start with a use case where the prediction-to-action path is short and the action is already defined. Customer churn prediction with an existing retention programme, claims cost prediction for an existing reserving process, or fraud propensity scoring for an existing investigation workflow. The model adds value because the action infrastructure already exists. Building the model and the action infrastructure simultaneously doubles the project risk and timeline.