PRA Model Risk Management

Last reviewed April 2026

In May 2024, the Prudential Regulation Authority's (PRA's) Supervisory Statement SS1/23 on model risk management took full effect, requiring banks to inventory, validate, monitor, and govern every material model in their organisation. For firms with hundreds or thousands of models, many of them machine learning systems, this is not a policy update. It is a structural change to how models are managed. PRA model risk management requirements set the regulatory standard for AI governance in UK banking and, by extension, for any financial institution that the PRA supervises.

What is PRA model risk management?

The PRA's model risk management framework, codified in SS1/23, establishes five principles that govern how banks identify, develop, validate, use, and govern models. The PRA defines a model broadly: any quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates. This definition is wide enough to capture machine learning systems, vendor-supplied scoring models, and spreadsheet-based calculators alongside traditional statistical models.

The five principles are: model identification and model risk classification; governance; model development, implementation, and use; independent model validation; and model risk mitigants. Each principle is supported by detailed expectations that the PRA assesses during supervisory reviews. The framework builds on the US Federal Reserve's SR 11-7 guidance but is tailored to the UK regulatory context and explicitly addresses AI and machine learning models.

The scope is comprehensive. Every model meeting the PRA's definition must be recorded in the inventory, with clear ownership and a risk tier reflecting its materiality. Development must follow documented standards. Independent validation must be conducted proportionate to model materiality. Model risk must be quantified and managed as part of the firm's overall risk framework. And governance structures must ensure board-level oversight and senior management accountability.

The landscape

SS1/23 took full effect in May 2024, following a twelve-month implementation period. The PRA is now assessing firms' compliance through routine supervisory activities. Early indications suggest that the most common gaps are in model inventory completeness (firms underestimating the number of models in scope), validation backlogs (insufficient validation capacity to cover the expanded scope), and ML-specific governance (frameworks designed for traditional models that do not adequately address ML characteristics).

The PRA's approach to AI is embedded within SS1/23 rather than treated separately. The statement explicitly acknowledges that AI and ML models present specific challenges: opacity, data dependency, drift, and the need for continuous monitoring. But it does not create separate requirements for AI models. Instead, it expects firms to apply the same principles with appropriate adaptations: more frequent monitoring for models that drift, explainability testing for opaque models, and enhanced data governance for data-intensive systems.

The Senior Managers and Certification Regime ensures that an identified individual is accountable for the firm's model risk management. The PRA expects this individual to have sufficient understanding of the model portfolio, including AI models, to exercise effective oversight. This does not mean the accountable Senior Management Function (SMF) holder needs to understand gradient boosting. It means they need to understand the model inventory, the risk profile, the validation status, and the governance framework well enough to make informed decisions and challenge effectively.

How AI changes this

ML models require adaptations to each of the five principles. For inventory: the definition of "model" must capture ML systems that may not be labelled as such. For development: standards must address ML-specific concerns like training data quality, feature engineering, and hyperparameter tuning. For validation: methods must address ML-specific risks like overfitting, data leakage, and fairness. For risk mitigants: monitoring must address drift and continuous learning. For governance: structures must accommodate the faster development cycles and iterative nature of ML development.

Automated model inventory management tracks the ML model estate, capturing metadata about each model's purpose, architecture, data sources, performance, and governance status. This addresses the inventory principle at scale. Without automation, maintaining an accurate inventory of hundreds of ML models, each of which may be updated frequently, is operationally impractical.

Continuous monitoring platforms provide the ongoing performance tracking that SS1/23 requires. For ML models, this means monitoring for data drift (changes in input distributions), concept drift (changes in the relationship between inputs and outputs), and performance degradation across segments. Alerts trigger revalidation when thresholds are breached, ensuring that the validation principle is maintained between periodic reviews.
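One widely used measure of data drift is the population stability index (PSI). The sketch below shows how a PSI breach could flag a model for revalidation; the 0.25 threshold is an industry rule of thumb, not a PRA requirement, and the simulated data is purely illustrative:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training-time) and a current input distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Rule of thumb: <0.1 stable, 0.1-0.25 moderate drift, >0.25 significant drift.
REVALIDATION_THRESHOLD = 0.25

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values at training time
current = rng.normal(0.5, 1.0, 10_000)   # live values after an upstream change

psi = population_stability_index(baseline, current)
if psi > REVALIDATION_THRESHOLD:
    print(f"PSI {psi:.2f}: flag model for revalidation")
```

In practice a monitoring platform would run checks like this per feature on a schedule, routing breaches into the validation queue rather than printing to a console.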

Model risk quantification for ML models is an emerging discipline. Traditional model risk quantification estimates the financial impact of model errors. For ML models, the error modes are different (distributional shift, bias amplification, adversarial vulnerability) and the quantification methods must reflect these. The PRA expects firms to develop quantification approaches that are appropriate for their ML portfolio, even as industry practice is still maturing.

What to know before you start

Conduct a gap assessment against all five principles. For each principle, assess your current capability for both traditional models and ML models. The gap is almost always wider for ML: traditional model governance practices do not automatically extend to ML, and the adaptations required are specific and technical. Prioritise the gaps that create the most regulatory exposure: inventory completeness and validation coverage are typical priority areas.

Validation capacity is the binding constraint for most firms. The expanded scope of SS1/23, combined with the growing ML model estate, creates validation demand that most firms' current capacity cannot meet. Options include training existing validators in ML techniques, hiring ML-specialist validators, outsourcing validation to specialist firms, and investing in automated validation tooling. Most firms will need a combination.

Engage your PRA supervisor on your implementation approach. The PRA has acknowledged that full compliance is a journey and has expressed willingness to discuss firms' implementation plans. A credible, prioritised plan that addresses the highest-risk gaps first is more defensible than a plan that attempts everything simultaneously. Early engagement also provides insight into the PRA's specific expectations for your firm's model portfolio.

Start with the model inventory. Principle 1, model identification and model risk classification, is the foundation for everything else. You cannot validate, monitor, or govern models you do not know about. Commission a comprehensive inventory exercise that captures all models meeting the PRA's definition, including shadow AI and vendor models. The inventory is the starting point and the single most valuable deliverable for SS1/23 compliance.
