Risk Assessment
Last reviewed April 2026
Most organisations have a risk register. Few have a risk taxonomy that is consistent across business units, let alone one connected to the data systems that would make automated risk assessment possible. The prerequisite for AI in risk management is not better algorithms. It is cleaner categories.
What is risk assessment?
Risk assessment is the process of identifying potential risks, evaluating their likelihood and impact, and determining how to manage them. In financial services, this spans credit risk, market risk, operational risk, liquidity risk, conduct risk, and increasingly, climate risk, cyber risk, and model risk. Each has its own regulatory framework, its own measurement methodology, and often its own organisational silo.
The challenge is not that organisations lack risk data. It is that risk data is fragmented. Credit risk sits in the lending system. Operational risk events are captured in a GRC platform. Market risk is monitored by the trading desk's risk engine. Conduct risk indicators are scattered across HR systems, complaints databases, and regulatory correspondence. Bringing these together into a coherent enterprise risk view is a data integration problem that most organisations have been working on for years with limited success.
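At its core, the integration task is mapping each silo's records onto one shared schema so they can be queried together. A minimal sketch of the idea in Python (the source systems, field names, and figures are hypothetical, not a reference to any real platform):

```python
from dataclasses import dataclass

@dataclass
class RiskEvent:
    """Common schema that every silo's records are mapped onto."""
    source: str       # originating system
    category: str     # enterprise risk taxonomy category
    description: str
    amount: float     # monetary exposure or loss

# Each silo stores similar facts under different field names,
# so integration is one small adapter per source system.
def from_lending(rec: dict) -> RiskEvent:
    return RiskEvent("lending", "credit", rec["borrower_note"], rec["exposure"])

def from_grc(rec: dict) -> RiskEvent:
    return RiskEvent("grc", "operational", rec["event_summary"], rec["loss_amount"])

events = [
    from_lending({"borrower_note": "Covenant breach on facility X",
                  "exposure": 2_500_000.0}),
    from_grc({"event_summary": "Payment system outage, 4 hours",
              "loss_amount": 120_000.0}),
]

# Once normalised, the enterprise view is a single queryable collection
# rather than a set of per-silo extracts.
total_exposure = sum(e.amount for e in events)
```

The hard part in practice is not the adapters themselves but agreeing the shared schema they map onto, which is why taxonomy work comes first.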
The post-2008 regulatory framework (Basel III for banks, Solvency II for insurers) demands quantitative risk assessment with validated models and documented assumptions. But the qualitative aspects of risk (emerging risks, scenario analysis, risk culture) remain dependent on human judgement. AI does not replace this judgement; it augments it by processing more data, identifying patterns across risk categories, and generating scenarios that humans can evaluate.
The landscape
Third-party and concentration risk have been elevated by events like the CrowdStrike outage in 2024 and the SVB collapse in 2023. Regulators are asking firms to demonstrate that they understand their dependency chains: which critical services depend on which providers, what happens when a key provider fails, and how quickly operations can recover. This is a data lineage and mapping problem before it is a modelling problem.
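Answering "what happens when a key provider fails" is, concretely, a transitive closure over a dependency graph. A sketch of that traversal, with an entirely hypothetical dependency map:

```python
# Hypothetical dependency map: each service lists what it directly
# depends on. Edges point from a dependent service to its provider.
DEPENDS_ON = {
    "payments": ["core-banking", "cloud-host"],
    "core-banking": ["cloud-host"],
    "claims": ["doc-store"],
    "doc-store": ["cloud-host"],
}

def impacted_by(provider: str) -> set[str]:
    """All services that directly or transitively depend on the provider."""
    impacted: set[str] = set()
    changed = True
    while changed:
        changed = False
        for svc, deps in DEPENDS_ON.items():
            if svc not in impacted and any(
                d == provider or d in impacted for d in deps
            ):
                impacted.add(svc)
                changed = True
    return impacted
```

With a map like this, concentration risk becomes visible as the size of `impacted_by` for each provider; the modelling question of recovery time only makes sense once this lineage exists.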
The PRA's SS1/23 on model risk management codifies expectations that were previously guidance. Models used for risk assessment, including AI models, must be inventoried, validated, monitored, and governed with clear accountability. The definition of "model" is broad enough to include machine learning systems used for risk scoring, scenario generation, and anomaly detection. If it produces an output that informs a risk decision, it is a model and it is in scope.
Climate risk has moved from the corporate responsibility department to the risk function. TCFD-aligned disclosure is mandatory for large UK firms. The PRA expects insurers and banks to integrate climate scenarios into their risk assessment processes. The challenge is that climate risk cuts across existing risk categories: it is a credit risk (via physical damage to collateral), a market risk (via asset repricing), an operational risk (via supply chain disruption), and an insurance risk (via changing loss patterns). Managing it requires cross-functional coordination that most risk frameworks are not designed to support.
How AI changes this
The most valuable near-term application is connecting disparate risk data sources into a unified view. Natural language processing can extract risk-relevant information from unstructured sources: board papers, audit reports, regulatory correspondence, and incident reports. This does not require sophisticated modelling; it requires good information extraction, classification, and integration into existing risk frameworks.
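To make "good information extraction, not sophisticated modelling" concrete, even a rule-based classifier captures the shape of the task: map unstructured text onto taxonomy categories. The keyword rules below are purely illustrative; a production system would use trained NLP models against the firm's own taxonomy:

```python
import re

# Hypothetical keyword rules mapping text to risk taxonomy categories.
RULES = {
    "credit": re.compile(r"\b(default|arrears|covenant)\b", re.I),
    "operational": re.compile(r"\b(outage|failure|error)\b", re.I),
    "conduct": re.compile(r"\b(complaints?|mis-?selling)\b", re.I),
}

def classify(text: str) -> list[str]:
    """Return every taxonomy category whose keywords appear in the text."""
    return [cat for cat, pattern in RULES.items() if pattern.search(text)]

classify("Audit noted a payment outage and a rise in customer complaints")
# -> ['operational', 'conduct']
```

The same pipeline then routes each classified item into the existing risk framework, which is why the taxonomy, not the model, is the binding constraint.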
Generative AI for scenario generation is the application that risk leaders find most compelling. Traditional stress testing uses predefined scenarios: a 30 per cent equity market decline, a 200-basis-point interest rate shock, a pandemic. These scenarios are useful but limited. GenAI can generate novel, plausible scenarios that combine multiple risk factors in ways that a human team might not consider: a cyber attack on a critical infrastructure provider coinciding with a severe weather event and a regulatory investigation, for example. The risk team's role shifts from constructing scenarios to evaluating and selecting from AI-generated ones.
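The workflow shift can be illustrated without any language model: even naive enumeration of factor combinations produces candidate scenarios for a team to filter, which is the evaluate-and-select role described above. A GenAI system would additionally narrate and rank the candidates. The factor list is illustrative:

```python
from itertools import combinations

FACTORS = [
    "cyber attack on a critical infrastructure provider",
    "severe weather event",
    "regulatory investigation",
    "sharp interest rate rise",
]

def candidate_scenarios(k: int) -> list[tuple[str, ...]]:
    """Every k-factor combination; humans evaluate plausibility and severity."""
    return list(combinations(FACTORS, k))

three_factor = candidate_scenarios(3)
# 4 factors taken 3 at a time -> 4 candidate scenarios to review
```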
Underwriting and risk assessment are converging, particularly in insurance. AI models that assess individual risk at the point of underwriting also contribute to portfolio-level risk assessment. The credit scoring models used in banking serve a similar dual purpose: individual credit decisions and portfolio-level credit risk measurement. Designing these models for both uses from the outset avoids the duplication that arises when risk and business functions build separate models on the same data.
What to know before you start
Taxonomy first, technology second. If your risk categories are inconsistent across business units, if different teams use different definitions for the same risk type, AI will amplify the inconsistency rather than resolve it. Invest in a clean, agreed risk taxonomy before deploying AI to assess against it. This is unglamorous work but it is the foundation on which everything else depends.
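The taxonomy audit itself is mechanical once the labels are collected: normalise known synonyms, then flag categories that still differ between units. A sketch with hypothetical business units and labels:

```python
# Hypothetical per-unit label usage: same risks, different names.
UNIT_LABELS = {
    "retail": {"credit", "ops", "conduct"},
    "markets": {"credit", "operational", "conduct", "market"},
}

# Agreed synonym map feeding the canonical taxonomy.
SYNONYMS = {"ops": "operational"}

def normalise(labels: set[str]) -> set[str]:
    return {SYNONYMS.get(label, label) for label in labels}

def inconsistencies() -> set[str]:
    """Categories not shared by every unit, after synonym normalisation."""
    normalised = [normalise(v) for v in UNIT_LABELS.values()]
    common = set.intersection(*normalised)
    return set.union(*normalised) - common
```

Here "ops" resolves cleanly, but "market" survives as a genuine gap one unit tracks and the other does not, exactly the kind of discrepancy that AI deployed on top of an unreconciled taxonomy would amplify.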
Risk models are among the most heavily regulated model classes. The validation requirements for a model used in capital calculation or stress testing are more demanding than for a model used in marketing or operations. Ensure your model risk management framework, including validation, monitoring, and governance, is designed for regulatory-grade models before deploying AI in risk assessment.
Emerging risk identification is the application where AI adds the most value relative to human effort. Monitoring regulatory publications, industry incident reports, peer disclosures, and macroeconomic indicators for signals of emerging risks is a task that humans do sporadically and AI can do continuously. Start here: it is lower regulatory risk than deploying AI in capital modelling, and it directly supports the board's risk oversight function.
Integration with data governance is essential, not optional. Risk models are only as good as the data that feeds them. If data quality, lineage, and access controls are not in place, the risk assessment outputs will not be trusted by the board or the regulator. Treat data governance as a prerequisite, not a parallel workstream.