High-Risk AI System
Last reviewed April 2026
Not all AI systems are created equal in the eyes of the law. A chatbot that answers product queries and a model that decides whether someone gets a mortgage carry fundamentally different risks. The EU AI Act draws a line between them, and most financial services AI falls on the regulated side. A high-risk AI system is one that the law deems significant enough to warrant mandatory governance, documentation, and oversight, and understanding the classification is the first step in compliance planning.
What is a high-risk AI system?
Under the EU AI Act, a high-risk AI system is one that is listed in Annex III of the Act or is a safety component of a product covered by existing EU harmonisation legislation. For financial services, the relevant Annex III categories are explicit: AI systems used to evaluate the creditworthiness of natural persons or establish their credit score, and AI systems used for risk assessment and pricing in life and health insurance. (Annex III also covers categories outside financial services, such as the evaluation and classification of emergency calls.) Fraud detection is a more nuanced case: the Act expressly excludes AI systems used to detect financial fraud from the creditworthiness category, but fraud detection systems can still be captured under the broader "law enforcement" category where they inform investigatory decisions.
The high-risk classification triggers the full set of obligations under the Act: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy and robustness requirements. These are not optional best practices. They are legal obligations with significant penalties for non-compliance. The obligations apply to both providers (developers) and deployers (users) of high-risk systems, though the specific obligations differ by role.
The classification is based on the system's intended purpose, not its technical architecture. A simple logistic regression used for credit scoring is a high-risk system. A complex neural network used to recommend internal training courses is not. The consequence of the decision, not the sophistication of the technology, determines the risk level. This purpose- and consequence-based classification aligns with how financial services regulators think about risk.
The landscape
The Act provides a mechanism for amending the high-risk list through delegated acts, meaning additional AI use cases can be added as the regulatory understanding of AI risks evolves. Financial services firms should monitor the European Commission's review process for potential expansions. Use cases that are not currently classified as high risk, such as customer service AI or marketing analytics, could be reclassified if the Commission determines they pose sufficient risk.
The FCA and PRA have not adopted the EU's classification framework for UK domestic regulation. However, for UK firms operating in the EU, the classification determines which of their systems must comply with EU AI Act requirements. The practical effect is a dual framework: PRA model risk management applies domestically, and EU AI Act high-risk requirements apply to EU-deployed systems.
Harmonised standards being developed by CEN and CENELEC will provide detailed technical specifications for compliance with each high-risk requirement. These standards, expected to be finalised during 2025 and 2026, will give firms concrete benchmarks against which to assess their systems. Until the standards are published, firms must comply with the Act's requirements as written, which are principles-based and subject to interpretation.
How AI changes this
Automated classification tools help organisations assess whether their AI systems qualify as high risk under the Act. A structured questionnaire covering the system's purpose, the decisions it informs, the data it processes, and the populations it affects produces a classification recommendation. This supports consistent classification across the organisation and makes it harder for systems to be classified as lower risk in order to sidestep governance obligations.
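A minimal sketch of what such questionnaire logic might look like, assuming a deliberately simplified rule set. The purpose names, fields, and statuses below are illustrative, not a complete or authoritative encoding of Annex III and its exceptions.

```python
from dataclasses import dataclass

# Illustrative subset of Annex III purposes relevant to financial services.
# A real tool would encode the full Annex III list, including its carve-outs.
HIGH_RISK_PURPOSES = {
    "credit_scoring",
    "creditworthiness_assessment",
    "life_health_insurance_pricing",
}

@dataclass
class UseCaseAnswers:
    """Answers from the classification questionnaire (fields are illustrative)."""
    intended_purpose: str          # e.g. "credit_scoring"
    informs_decisions_about: str   # e.g. "natural_persons"
    deployed_in_eu: bool

def classify(answers: UseCaseAnswers) -> str:
    """Return a classification *recommendation*; ambiguous cases need legal review."""
    if not answers.deployed_in_eu:
        return "out_of_scope_recommend_legal_review"
    if (answers.intended_purpose in HIGH_RISK_PURPOSES
            and answers.informs_decisions_about == "natural_persons"):
        return "high_risk"
    return "not_high_risk_pending_review"

print(classify(UseCaseAnswers("credit_scoring", "natural_persons", True)))  # high_risk
```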
Compliance management platforms track the status of each high-risk system against the Act's requirements. For each system, the platform records whether the risk management process is documented, whether data governance practices are in place, whether technical documentation is complete, whether record-keeping is configured, whether transparency obligations are met, and whether human oversight mechanisms are operational. Dashboard views show the compliance posture across the portfolio.
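One way to model that per-system record, as a rough sketch: the requirement keys mirror the obligations listed above, while the class names, statuses, and dashboard function are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Requirement keys mirror the high-risk obligations discussed above.
REQUIREMENTS = [
    "risk_management", "data_governance", "technical_documentation",
    "record_keeping", "transparency", "human_oversight",
]

@dataclass
class SystemComplianceRecord:
    system_name: str
    # Each requirement maps to a status: "met", "in_progress", or "gap".
    status: dict = field(default_factory=lambda: {r: "gap" for r in REQUIREMENTS})

def portfolio_dashboard(records: list) -> dict:
    """Compliance posture across the portfolio: open items per system."""
    return {r.system_name: sum(1 for s in r.status.values() if s != "met")
            for r in records}

mortgage_model = SystemComplianceRecord("mortgage_credit_scoring")
mortgage_model.status["technical_documentation"] = "met"
print(portfolio_dashboard([mortgage_model]))  # {'mortgage_credit_scoring': 5}
```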
The conformity assessment process for high-risk systems in financial services is primarily self-assessment. Providers assess their own compliance and maintain documentation to demonstrate it. This self-assessment must be thorough and evidence-based, because supervisory authorities can request the documentation at any time. Automated evidence collection, linking each requirement to the supporting artefacts (validation reports, data quality assessments, monitoring logs), simplifies the assessment process.
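At its simplest, automated evidence collection is a mapping from each requirement to its supporting artefacts, so the documentation can be produced when an authority asks for it. The artefact types and paths below are hypothetical.

```python
# Hypothetical evidence register: each requirement links to the artefacts
# that demonstrate compliance with it.
evidence_register = {
    "data_governance": [
        {"type": "data_quality_assessment", "path": "artefacts/dq_report_2026Q1.pdf"},
    ],
    "risk_management": [
        {"type": "validation_report", "path": "artefacts/validation_v3.pdf"},
    ],
    "record_keeping": [
        {"type": "monitoring_log", "path": "artefacts/monitoring/2026-04.jsonl"},
    ],
}

def unevidenced(requirements: list, register: dict) -> list:
    """Requirements with no supporting artefact yet -- these block self-assessment."""
    return [r for r in requirements if not register.get(r)]

print(unevidenced(["data_governance", "human_oversight"], evidence_register))
# ['human_oversight']
```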
Post-market monitoring, required by Article 72, ensures that high-risk systems continue to comply after deployment. This aligns with the model risk management practice of ongoing monitoring but adds specific obligations: monitoring must be "systematic," the results must be documented, and serious incidents must be reported to the relevant authority. For financial services, this means integrating EU AI Act monitoring requirements into existing model monitoring frameworks.
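A sketch of how an existing monitoring loop might pick up the Act's documentation and escalation hooks. The accuracy threshold and the escalation function are placeholders: the Act sets no numeric limits, and firms define their own serious-incident criteria in the post-market monitoring plan.

```python
import datetime
import json

# Placeholder threshold; in practice defined in the monitoring plan.
ACCURACY_FLOOR = 0.90

def escalate_for_incident_review(entry: dict) -> None:
    # Hypothetical hook: routes to the incident process that decides whether
    # the breach is a "serious incident" requiring notification.
    print(f"Escalating {entry['system']} for incident review")

def run_monitoring_cycle(system_name: str, accuracy: float, log_path: str) -> None:
    """Systematic monitoring: every cycle is documented, breaches are escalated."""
    entry = {
        "system": system_name,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "accuracy": accuracy,
        "breach": accuracy < ACCURACY_FLOOR,
    }
    with open(log_path, "a") as log:  # documented results (record-keeping)
        log.write(json.dumps(entry) + "\n")
    if entry["breach"]:
        escalate_for_incident_review(entry)

run_monitoring_cycle("mortgage_credit_scoring", 0.87, "monitoring.jsonl")
```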
What to know before you start
Classify every AI system in your inventory against the EU AI Act's risk tiers. The starting point is your AI use case inventory. For each system, assess whether it falls within Annex III's high-risk categories. Document the classification rationale. Where the classification is ambiguous (a system that assists but does not make credit decisions, for example), document the reasoning and seek legal advice. Classification errors in either direction, over- or under-classifying, carry costs.
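Recording the rationale alongside the classification makes the decision auditable later. A minimal record might look like the following; the field names and values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ClassificationDecision:
    """Audit record for one inventory entry (fields are illustrative)."""
    system_name: str
    risk_tier: str           # e.g. "high_risk", "limited_risk", "minimal_risk"
    annex_iii_category: str  # e.g. "5(b) creditworthiness" or "n/a"
    rationale: str
    ambiguous: bool          # True -> route to legal review
    reviewed_by_legal: bool = False

decision = ClassificationDecision(
    system_name="loan_officer_assistant",
    risk_tier="high_risk",
    annex_iii_category="5(b) creditworthiness",
    rationale=("Assists rather than decides, but its output materially "
               "informs credit decisions about natural persons."),
    ambiguous=True,
)
```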
Gap analysis is the next step. For each system classified as high risk, assess the current state against each of the Act's requirements. The gaps will typically cluster around documentation (more extensive than current practice), data governance (specifically the requirement for representative and bias-free training data), and human oversight (specifically the design requirement for effective human intervention, not just human review).
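Given per-requirement statuses, the gap analysis reduces to listing everything short of "met". A toy version, with illustrative status values:

```python
# Current state per requirement for one high-risk system (values illustrative).
current_state = {
    "risk_management": "met",
    "data_governance": "partial",  # training data representativeness unassessed
    "technical_documentation": "partial",
    "record_keeping": "met",
    "transparency": "met",
    "human_oversight": "gap",      # review exists; effective intervention does not
}

gaps = {req: state for req, state in current_state.items() if state != "met"}
print(gaps)
# {'data_governance': 'partial', 'technical_documentation': 'partial',
#  'human_oversight': 'gap'}
```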
Build compliance into the development lifecycle. For new AI systems, the high-risk requirements should be addressed from the design stage. Controls framework checkpoints should verify classification, documentation completeness, data governance compliance, and oversight mechanisms before deployment. Retrofitting compliance into existing systems is more expensive and carries higher risk of gaps.
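A pre-deployment checkpoint can be expressed as a gate that blocks the release while any check is unmet. The check names follow the paragraph above; the implementation is a sketch with stubbed checks, not a prescribed control design.

```python
# Pre-deployment gate: every check must pass before a high-risk system ships.
def checks_for(system: dict) -> dict:
    return {
        "classification_recorded": system.get("risk_tier") is not None,
        "documentation_complete": system.get("tech_doc_version") is not None,
        "data_governance_signed_off": system.get("dq_signoff", False),
        "human_oversight_operational": system.get("oversight_mechanism") is not None,
    }

def deployment_gate(system: dict) -> bool:
    failures = [name for name, passed in checks_for(system).items() if not passed]
    if failures:
        print(f"Blocked: {', '.join(failures)}")
        return False
    return True

candidate = {"risk_tier": "high_risk", "tech_doc_version": "1.2",
             "dq_signoff": True, "oversight_mechanism": None}
assert deployment_gate(candidate) is False  # blocked: human_oversight_operational
```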
Start with the systems that affect EU customers most directly: credit scoring, insurance pricing, and fraud detection for EU-domiciled individuals. These are the systems where non-compliance carries the highest regulatory and financial risk. Achieve compliance for these priority systems first, then extend to the broader portfolio.