EU AI Act
Last reviewed April 2026
The EU AI Act is the first comprehensive AI law in the world, and it classifies core financial services uses of AI as high risk. Credit scoring, insurance risk assessment and pricing, fraud detection, and claims assessment all fall within the Act's scope, with the first two singled out as high risk. For UK financial institutions with EU operations, the Act creates mandatory obligations that will reshape how AI is built, documented, and governed. Compliance is not optional for firms that serve EU customers, and the Act's influence on global AI governance standards means it will shape expectations even in jurisdictions with no equivalent law.
What is the EU AI Act?
The EU AI Act is a regulation that establishes a harmonised legal framework for AI across the European Union. Adopted in 2024 with phased implementation through 2027, it classifies AI systems by risk level and imposes obligations proportionate to that risk. The classification system has four tiers: unacceptable risk (banned), high risk (strict obligations), limited risk (transparency obligations), and minimal risk (no specific obligations).
Financial services AI falls predominantly into the high-risk category. Article 6 and Annex III explicitly identify AI systems used for creditworthiness assessment and credit scoring, and for risk assessment and pricing in life and health insurance, as high risk; AI used purely to detect financial fraud is expressly excepted from the creditworthiness category. Systems classified as high risk must comply with the full set of high-risk requirements: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, and robustness.
The Act applies to providers (those who develop or place AI systems on the market) and deployers (those who use AI systems in their operations). A UK bank that develops its own credit scoring model and deploys it for EU customers is both a provider and a deployer. A UK bank that uses a vendor's credit scoring model for EU customers is a deployer. Both roles carry obligations, though the provider's obligations are more extensive.
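To make the role logic concrete, here is a minimal sketch of how a firm might tag systems in an internal inventory. The data structure and field names are hypothetical, and the real definitions in Article 3 carry nuances (substantial modification, white-labelling) that this deliberately ignores.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    developed_in_house: bool   # firm developed (or substantially modified) the system
    used_in_operations: bool   # firm deploys the system in its own processes

def act_roles(system: AISystem) -> set[str]:
    """Return the EU AI Act roles a firm holds for a given system.

    Simplified: the Act's actual definitions include nuances this
    two-flag model does not capture.
    """
    roles = set()
    if system.developed_in_house:
        roles.add("provider")
    if system.used_in_operations:
        roles.add("deployer")
    return roles

# A bank that builds and uses its own credit model holds both roles.
print(sorted(act_roles(AISystem("credit-scoring-v3", True, True))))   # ['deployer', 'provider']
# A bank using a vendor model in its operations is a deployer only.
print(sorted(act_roles(AISystem("vendor-fraud-model", False, True)))) # ['deployer']
```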
The landscape
The implementation timeline is phased. Prohibited AI practices (social scoring and, with exceptions, real-time biometric identification in public spaces for law enforcement) applied from February 2025. The general-purpose AI model rules and governance structures applied from August 2025. The high-risk system requirements, the ones most relevant to financial services, apply from August 2026. This leaves firms only a few months from the date of this writing to achieve compliance for their high-risk AI systems.
The Act is enforced at both EU and national level. National competent authorities, designated by each member state, supervise AI providers and deployers within their jurisdictions. The European AI Office, established within the European Commission, coordinates enforcement and oversees general-purpose AI models. For financial services, the existing financial supervisory authorities (national regulators, the ECB, the EBA, ESMA, and EIOPA) are expected to play a role, though the exact division of responsibility between AI authorities and financial regulators is still being clarified.
Penalties are significant. Non-compliance with the high-risk requirements can result in fines of up to 15 million euros or 3 per cent of global annual turnover, whichever is higher. For large financial institutions, 3 per cent of global turnover represents a fine that dwarfs most regulatory penalties in financial services. The penalty structure is designed to make non-compliance economically irrational.
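The "whichever is higher" mechanics are easy to check. A minimal sketch, using an illustrative turnover figure rather than any real firm's:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for high-risk non-compliance: the greater of
    EUR 15 million or 3 per cent of global annual turnover."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

# Illustrative: a bank with EUR 40 billion global turnover faces a cap
# of EUR 1.2 billion.
print(f"{max_fine_eur(40e9):,.0f}")  # 1,200,000,000
```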
How AI changes this
The Act's requirements create specific technical and organisational obligations. Risk management (Article 9) requires a documented process for identifying and mitigating risks throughout the AI lifecycle. Data governance (Article 10) requires that training, validation, and testing datasets meet quality criteria including completeness, representativeness, and freedom from errors and bias. Technical documentation (Article 11) requires detailed descriptions of the system's design, development, and testing.
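As an illustration, Article 10's quality criteria could be translated into automated dataset checks along the following lines. The thresholds, column conventions, and metrics are hypothetical policy choices; the Act does not prescribe specific numbers.

```python
import pandas as pd

def dataset_quality_report(df: pd.DataFrame,
                           protected_attr: str,
                           max_missing_rate: float = 0.01,
                           min_group_share: float = 0.05) -> dict:
    """Check a training dataset against illustrative completeness and
    representativeness criteria inspired by Article 10. The thresholds
    are internal policy choices, not regulatory requirements."""
    report = {}
    # Completeness: flag columns whose missing-value rate exceeds the threshold.
    missing = df.isna().mean()
    report["columns_over_missing_threshold"] = missing[missing > max_missing_rate].to_dict()
    # Representativeness: flag protected groups below a minimum share of records.
    shares = df[protected_attr].value_counts(normalize=True)
    report["underrepresented_groups"] = shares[shares < min_group_share].to_dict()
    # Errors: count exact duplicate records as a basic integrity check.
    report["duplicate_rows"] = int(df.duplicated().sum())
    return report
```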
Record-keeping (Article 12) requires automatic logging of the system's operation to ensure traceability. Transparency (Article 13) requires that systems are accompanied by instructions for use that enable deployers to interpret outputs and use the system appropriately. Human oversight (Article 14) requires design features that enable effective human oversight, including the ability to understand, interpret, and override the system. Accuracy, robustness, and cybersecurity (Article 15) require that systems perform consistently and are resilient to errors and attacks.
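A minimal sketch of Article 12-style automatic logging might look like the following. The record schema is an assumption for illustration, not a format the Act specifies.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_log")

def log_decision(system_id: str, model_version: str, inputs: dict,
                 output: dict, human_override: dict | None = None) -> None:
    """Emit one traceability record per automated decision. Fields are
    illustrative; align the schema and retention with your own policies."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_override": human_override,  # set when a human overseer intervenes
    }
    logger.info(json.dumps(record, default=str))

# Example: a credit decision later overridden by a human reviewer.
log_decision(
    "credit-scoring-v3", "3.4.1",
    inputs={"application_id": "A-1029"},
    output={"decision": "decline", "score": 0.31},
    human_override={"reviewer": "jdoe", "decision": "approve"},
)
```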
The conformity assessment process requires providers of high-risk systems to demonstrate compliance before placing the system on the market. For financial services AI, this is primarily a self-assessment (not third-party certification), but the assessment must be documented and the documentation must be available to supervisory authorities. This creates a significant documentation burden for firms with large AI portfolios.
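One way to keep that burden tractable is to treat each system's Article 11 documentation as a structured manifest whose gaps can be tracked programmatically. A minimal sketch, with field names drawn from this section's summary rather than the Annex IV template:

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Per-system Article 11 manifest. Headings follow this section's
    summary; consult Annex IV of the Act for the authoritative contents."""
    system_description: str = ""
    development_process: str = ""
    monitoring_and_controls: str = ""
    data_governance_practices: str = ""
    testing_and_validation_results: dict[str, float] = field(default_factory=dict)

    def missing_sections(self) -> list[str]:
        """List headings still empty, for tracking documentation gaps."""
        return [name for name, value in vars(self).items() if not value]

doc = TechnicalDocumentation(system_description="Credit scoring model v3")
print(doc.missing_sections())
# ['development_process', 'monitoring_and_controls',
#  'data_governance_practices', 'testing_and_validation_results']
```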
The Act also regulates general-purpose AI models (including large language models), requiring providers to maintain technical documentation, comply with EU copyright law, and publish summaries of training data. For financial institutions that build on foundation models, this means understanding the compliance obligations of the model provider and ensuring that the firm's own use of the model does not introduce non-compliant processing.
What to know before you start
Assess which of your AI systems are in scope. The Act applies to AI systems placed on the EU market or used to make decisions that affect individuals in the EU. A UK firm's credit model that is used only for UK customers is out of scope. The same model used for customers in Ireland or France is in scope. Map your AI systems to their geographic deployment to determine the compliance boundary.
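A minimal sketch of that mapping, assuming a simple inventory keyed by deployment market (the inventory itself is hypothetical; the country codes are standard ISO 3166-1 alpha-2):

```python
# EU member states by ISO 3166-1 alpha-2 code.
EU_MEMBERS = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SE",
    "SI", "SK", "ES",
}

def in_scope(deployment_markets: set[str]) -> bool:
    """A system is in scope if it serves customers in any EU member state."""
    return bool(deployment_markets & EU_MEMBERS)

inventory = {
    "credit-scoring-v3": {"GB"},               # UK-only: out of scope
    "fraud-detection-eu": {"GB", "IE", "FR"},  # serves EU customers: in scope
}
scoped = {name for name, markets in inventory.items() if in_scope(markets)}
print(scoped)  # {'fraud-detection-eu'}
```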
The documentation requirements are extensive. The technical documentation required by Article 11 is more detailed than what most firms currently produce. It includes a general description of the system, the development process, the monitoring and control mechanisms, the testing and validation results, and the data governance practices applied. Building this documentation from scratch is a significant effort. Firms that have been maintaining good model risk management documentation under SS1/23 will have a head start.
Harmonised standards are still being developed. The Act references harmonised standards that will provide detailed technical specifications for compliance. These standards, being developed by CEN and CENELEC, are not yet finalised. Firms must begin compliance work based on the Act's requirements, with the expectation that specific technical details may be refined when the standards are published.
Start with a scoping exercise: identify all AI systems used in connection with EU operations, classify them under the Act's risk tiers, and assess the gap between current practices and the high-risk requirements. Prioritise the gap closure based on the August 2026 deadline, focusing on the areas that require the most structural change: risk management processes, data governance practices, documentation, and human oversight mechanisms. Build an implementation plan with clear milestones and assign accountability through your AI governance framework.
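A minimal sketch of the gap-tracking step, assessing one hypothetical system against the requirement areas listed earlier (accuracy and robustness grouped, statuses invented for illustration):

```python
from datetime import date

# Requirement areas from Articles 9 to 15, as summarised above.
REQUIREMENTS = [
    "risk management", "data governance", "technical documentation",
    "record-keeping", "transparency", "human oversight",
    "accuracy and robustness",
]

# High-risk obligations apply from August 2026 (2 August assumed here).
DEADLINE = date(2026, 8, 2)

def open_gaps(assessment: dict[str, bool]) -> list[str]:
    """Return the requirement areas not yet compliant for one system."""
    return [req for req in REQUIREMENTS if not assessment.get(req, False)]

# Hypothetical self-assessment for a single credit-scoring system.
assessment = {
    "risk management": True, "data governance": False,
    "technical documentation": False, "record-keeping": True,
    "transparency": True, "human oversight": False,
    "accuracy and robustness": True,
}
print(f"{len(open_gaps(assessment))} open gaps; "
      f"{(DEADLINE - date.today()).days} days to the deadline")
```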