AI Risk Assessment
Last reviewed April 2026
A business unit wants to deploy an AI model that automates 60 per cent of claims triage decisions. The model has been tested and performs well. But nobody has assessed what happens when it performs badly: which customers are affected, what the financial exposure is, whether the regulator will object, or how the firm would detect and respond to a failure. AI risk assessment is the structured process that asks these questions before deployment, not after the first incident.
What is AI risk assessment?
AI risk assessment is the systematic evaluation of the risks associated with an AI system, covering technical risks (model failure, data quality), operational risks (process disruption, dependency failures), regulatory risks (non-compliance, supervisory action), ethical risks (discrimination, lack of transparency), and reputational risks (customer harm, public backlash). It is conducted before deployment and reviewed periodically throughout the system's lifecycle.
In financial services, AI risk assessment connects to the broader operational risk and model risk management frameworks. The risk categories are familiar: what can go wrong, how likely is it, how severe is the impact, and what controls mitigate the risk. The AI-specific dimensions are the ways in which AI systems fail differently from traditional systems: they degrade silently as data drifts, they can discriminate without intent, and they can be opaque in ways that make failure diagnosis difficult.
A good risk assessment produces a risk tier that determines the governance controls required. A low-risk internal tool requires basic documentation and periodic review. A high-risk customer-facing decision system requires independent validation, continuous monitoring, human oversight, and regulatory notification. The risk tier drives the investment in controls, ensuring proportionality.
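To make the proportionality rule mechanical rather than discretionary, the tier-to-controls mapping can be captured as data. A minimal sketch in Python; the tier names and control lists here are illustrative, not a prescribed control set:

```python
# Illustrative mapping from risk tier to required governance controls.
# Tier names and control sets are examples only; use your firm's taxonomy.
TIER_CONTROLS = {
    "low": [
        "basic documentation",
        "periodic review",
    ],
    "medium": [
        "documentation",
        "pre-deployment testing sign-off",
        "performance monitoring",
        "annual review",
    ],
    "high": [
        "independent validation",
        "continuous monitoring",
        "human oversight of decisions",
        "regulatory notification where required",
        "annual review",
    ],
}

def required_controls(tier: str) -> list[str]:
    """Return the governance controls mandated for a given risk tier."""
    return TIER_CONTROLS[tier]
```

Keeping the mapping in one place means a tier decision automatically implies a control plan, and changes to the control standard apply to every system in that tier at once.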
The landscape
The EU AI Act mandates risk assessment for high-risk AI systems through its conformity assessment process. Providers of high-risk systems must identify foreseeable risks, implement risk mitigation measures, and document their risk management throughout the system's lifecycle. For financial institutions deploying AI in credit, insurance, or fraud detection, this creates a mandatory risk assessment requirement with prescribed elements.
The PRA and FCA expect firms to assess AI risks within their existing risk management frameworks, applying the same rigour used for other operational and model risks. The PRA's SS1/23 requires that models be risk-assessed and that the depth of governance, validation, and monitoring be proportionate to the risk. Firms that treat all AI systems identically, either with excessive governance on low-risk tools or insufficient governance on high-risk systems, are not meeting the proportionality principle.
The NIST AI Risk Management Framework provides a structured approach that many financial institutions are adapting. Its "Govern, Map, Measure, Manage" structure aligns well with financial services risk frameworks, and its catalogue of AI-specific risks provides a useful starting taxonomy. However, NIST alone is insufficient for UK and EU regulatory compliance; it must be supplemented with jurisdiction-specific requirements.
How AI changes this
Structured risk assessment templates standardise the evaluation process across the organisation. A questionnaire covering data sources, model type, deployment context, affected populations, regulatory requirements, and failure modes produces a consistent risk score that can be compared across use cases. This prevents the common problem where two similar AI projects receive different governance treatment because they were assessed by different teams with different standards.
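One way to standardise the template is a typed record that every proposal must complete before it reaches the governance forum. A minimal sketch; the field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class RiskQuestionnaire:
    """One record per proposed AI use case. Fields are illustrative."""
    use_case: str
    data_sources: list[str]          # e.g. ["internal claims", "credit bureau"]
    uses_personal_data: bool
    model_type: str                  # e.g. "gradient boosting", "LLM"
    customer_facing: bool
    consequential_decisions: bool    # credit, claims, pricing, fraud flags
    affected_populations: list[str]
    regulatory_scope: list[str]      # e.g. ["EU AI Act high-risk", "SS1/23"]
    known_failure_modes: list[str]
```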
Automated risk scoring uses the questionnaire responses to compute a risk tier, applying predefined rules that map risk factors to governance requirements. A customer-facing credit model that uses personal data and makes consequential decisions automatically receives a high-risk classification. An internal tool that summarises meeting notes receives a low-risk classification. The automation ensures consistency and reduces the time from proposal to governance decision.
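A sketch of the rule layer, using questionnaire answers of the kind captured above. The rules and thresholds are placeholders for whatever your framework defines; in practice the rule set would be owned by the risk function, not the project team:

```python
def assess_tier(
    customer_facing: bool,
    consequential_decisions: bool,
    uses_personal_data: bool,
    regulatory_scope: list[str],
) -> str:
    """Map questionnaire answers to a risk tier using predefined rules.

    Rules here are illustrative; a real framework would be richer.
    """
    # Any system in a mandatory regulatory category is high-risk by default.
    if "EU AI Act high-risk" in regulatory_scope:
        return "high"
    # Customer-facing systems making consequential decisions on personal
    # data (e.g. a credit model) are high-risk.
    if customer_facing and consequential_decisions and uses_personal_data:
        return "high"
    # Customer-facing or consequential, but not both: medium.
    if customer_facing or consequential_decisions:
        return "medium"
    # Internal tools with no consequential decisions (e.g. meeting-note
    # summarisation): low.
    return "low"
```

Because the rules are explicit, two similar proposals receive the same tier regardless of who assesses them, and any rule change is visible and auditable.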
Scenario-based risk assessment uses AI to generate failure scenarios that may not be immediately obvious. What if the training data contains a bias that the model amplifies? What if a third-party data source changes its methodology? What if the model performs well on average but poorly for a specific customer segment? AI-assisted scenario generation broadens the risk assessment beyond the risks that the assessors have personally experienced.
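A hedged sketch of how AI-assisted scenario generation might be wired in. `call_llm` is a hypothetical helper standing in for whichever model endpoint the firm has approved; the output is raw material for the assessors to review, not scenarios to accept verbatim:

```python
def generate_failure_scenarios(system_description: str, n: int = 10) -> list[str]:
    """Ask an LLM for plausible failure scenarios for an AI system.

    `call_llm` is a hypothetical wrapper around an approved model
    endpoint; assessors must review the output, not accept it verbatim.
    """
    prompt = (
        f"You are assisting an AI risk assessment. For the system below, "
        f"list {n} distinct failure scenarios, covering data drift, "
        f"third-party dependency changes, bias amplification, and poor "
        f"performance on specific customer segments.\n\n"
        f"System: {system_description}"
    )
    response = call_llm(prompt)  # hypothetical helper
    return [line.strip("- ") for line in response.splitlines() if line.strip()]
```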
Continuous risk monitoring tracks risk indicators after deployment, updating the risk assessment as new information emerges. A model that was medium-risk at deployment may become high-risk if its usage expands, if the regulatory environment changes, or if monitoring reveals performance issues. Dynamic risk assessment reflects the reality that risk is not fixed at the point of deployment.
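A minimal sketch of dynamic re-assessment triggers; the indicator names and thresholds are assumptions to be replaced with your own monitoring metrics:

```python
# Illustrative post-deployment risk indicators and escalation thresholds.
THRESHOLDS = {
    "population_stability_index": 0.25,   # data drift
    "monthly_decision_volume": 50_000,    # usage expansion
    "segment_error_rate_gap": 0.10,       # uneven performance by segment
}

def reassessment_triggered(indicators: dict[str, float]) -> list[str]:
    """Return the indicators that breach thresholds and so require an
    ad hoc review of the risk assessment."""
    return [
        name for name, limit in THRESHOLDS.items()
        if indicators.get(name, 0.0) > limit
    ]

# Example: drift plus a widening segment gap triggers a review.
breaches = reassessment_triggered({
    "population_stability_index": 0.31,
    "monthly_decision_volume": 20_000,
    "segment_error_rate_gap": 0.12,
})
# breaches == ["population_stability_index", "segment_error_rate_gap"]
```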
What to know before you start
Risk assessment must happen before development, not after. If a use case is assessed as high-risk after the model has been built, the governance requirements may necessitate redesign, which wastes the development investment. Embed risk assessment at the use case approval stage, before data is sourced or models are developed. This ensures that the governance requirements are known from the outset and built into the project plan.
Involve the right people. A risk assessment conducted solely by the data science team will miss regulatory risks, reputational risks, and operational risks that sit outside their expertise. Include compliance, legal, operations, and the business owner in the assessment. The fifteen minutes each person spends reviewing the assessment is cheaper than discovering a missed risk in production.
The risk assessment should be a living document. Schedule reviews at defined intervals (annually for high-risk, biennially for lower-risk) and trigger ad hoc reviews when material changes occur: new data sources, model retraining, expanded use, or regulatory changes. A risk assessment that sits in a drawer until the next audit provides no ongoing protection.
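The review cadence can be computed rather than remembered. A sketch using the intervals above; the material-change event names are illustrative:

```python
from datetime import date, timedelta

# Review intervals by tier: annual for high-risk, biennial for lower tiers.
REVIEW_INTERVAL = {
    "high": timedelta(days=365),
    "medium": timedelta(days=730),
    "low": timedelta(days=730),
}

# Events that force an immediate ad hoc review (illustrative names).
MATERIAL_CHANGES = {"new data source", "model retrained", "expanded use",
                    "regulatory change"}

def next_review(tier: str, last_review: date, events: set[str]) -> date:
    """Next review date; any material change forces an immediate review."""
    if events & MATERIAL_CHANGES:
        return date.today()
    return last_review + REVIEW_INTERVAL[tier]
```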
Start by defining your risk assessment framework: the risk categories, the scoring methodology, the tier definitions, and the governance controls for each tier. Then apply it retrospectively to existing AI systems. The retrospective assessment will reveal gaps in your current AI governance and provide a prioritised remediation plan. New AI projects should be assessed from proposal stage using the same framework.
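Applied retrospectively, the framework becomes a loop over the existing AI inventory. A sketch, assuming each inventory record already carries its assessed tier and control status; the record fields are illustrative:

```python
def remediation_plan(inventory: list[dict]) -> list[dict]:
    """Assess every existing AI system against the framework and rank
    the control gaps, highest risk first. Record fields are illustrative."""
    order = {"high": 0, "medium": 1, "low": 2}
    gaps = []
    for system in inventory:
        tier = system["assessed_tier"]  # output of the framework's scoring
        missing = set(system["required_controls"]) - set(system["controls_in_place"])
        if missing:
            gaps.append({"system": system["name"], "tier": tier,
                         "missing_controls": sorted(missing)})
    return sorted(gaps, key=lambda g: order[g["tier"]])
```

The sorted output is the prioritised remediation plan: high-risk systems with missing controls surface first.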