UK AI Regulation

Last reviewed April 2026

The UK does not have an AI Act, and it does not plan to introduce one. Instead, it has directed existing regulators to apply AI principles within their sectors, which means the Financial Conduct Authority (FCA), Prudential Regulation Authority (PRA), Information Commissioner's Office (ICO), Competition and Markets Authority (CMA), and Ofcom each interpret and enforce AI expectations independently. For financial services firms, this is both a freedom and a complication: there is no single rule book to follow, but no single rule book to rely on either. Understanding UK AI regulation therefore means understanding how multiple regulators' expectations overlap, reinforce one another, and occasionally conflict.

What is UK AI regulation?

UK AI regulation is the framework of sector-specific rules, guidance, and supervisory expectations that govern AI use across the economy. The government's approach, set out in its 2023 white paper "A pro-innovation approach to AI regulation" and reinforced in subsequent policy statements, is principles-based and regulator-led. Five cross-cutting principles guide all regulators: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
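
To make the principles concrete, the sketch below treats them as a per-system review checklist. This is purely illustrative: the review questions are our own paraphrase, not regulator-issued text.

```python
# Illustrative only: the five cross-cutting principles from the 2023 white
# paper, paired with the kind of question a firm might ask of each AI system.
# The question wording is our own paraphrase, not official guidance.
PRINCIPLES = {
    "safety, security and robustness": "Does the system fail safely and resist misuse?",
    "transparency and explainability": "Can outputs be explained to users and supervisors?",
    "fairness": "Has the system been tested for unjustified differential outcomes?",
    "accountability and governance": "Is a named senior manager accountable for it?",
    "contestability and redress": "Can affected individuals challenge a decision?",
}

def principle_checklist(system_name: str) -> list[str]:
    """Render a simple review checklist for one AI system."""
    return [f"[{system_name}] {p}: {q}" for p, q in PRINCIPLES.items()]

for line in principle_checklist("credit-scoring-v3"):
    print(line)
```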

No new statutory regulator has been created. Instead, existing regulators, including the FCA, PRA, ICO, and CMA, are expected to interpret these principles within their mandates and existing powers. For financial services, this means AI governance is regulated primarily through the FCA's conduct framework, the PRA's prudential framework, and the ICO's data protection framework, with the Senior Managers and Certification Regime (SM&CR) providing individual accountability.

The UK's approach is deliberately distinct from the EU's. Where the EU AI Act creates a horizontal, prescriptive framework with specific obligations for high-risk AI systems, the UK relies on existing regulators to develop sector-appropriate guidance. The government argues this approach is more flexible and innovation-friendly. Critics argue it creates uncertainty because firms must navigate multiple regulators' evolving expectations rather than a single set of rules.

The landscape

The FCA, PRA, and Bank of England jointly published a discussion paper (DP5/22) on AI and machine learning in financial services in 2022, followed by a feedback statement (FS2/23) in 2023. The paper identifies the benefits and risks of AI, explores how existing regulatory frameworks apply, and signals areas where further guidance may be needed. The regulators concluded that their existing frameworks are broadly sufficient, but that specific expectations for AI may need to be clarified.

The PRA's supervisory statement SS1/23 on model risk management is the most concrete regulatory instrument for AI in UK financial services. While not AI-specific, it applies to all models, explicitly including machine learning systems. Its principles for model inventory, validation, monitoring, and governance create a structured framework that in-scope firms are expected to meet.
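
As a rough illustration of what those themes imply in practice, here is a minimal sketch of a model inventory record. The schema and every field name are our own assumptions: SS1/23 sets principles, not a data format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical inventory record loosely following SS1/23's themes of
# inventory, validation, monitoring, and governance. SS1/23 prescribes
# principles, not a schema, so every field name here is an assumption.
@dataclass
class ModelRecord:
    model_id: str
    description: str
    tier: int                               # firm-assigned materiality tier
    owner: str                              # accountable individual (SM&CR mapping)
    is_machine_learning: bool
    last_validated: date | None = None
    monitoring_metrics: list[str] = field(default_factory=list)

    def validation_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag models whose independent validation is stale or missing."""
        if self.last_validated is None:
            return True
        return (today - self.last_validated).days > max_age_days

record = ModelRecord(
    model_id="MDL-0042",
    description="Retail credit scoring (gradient boosting)",
    tier=1,
    owner="Head of Retail Credit Risk",
    is_machine_learning=True,
    last_validated=date(2025, 3, 1),
    monitoring_metrics=["population stability index", "Gini", "override rate"],
)
print(record.validation_overdue(today=date(2026, 4, 1)))  # True: over a year old
```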

The ICO's AI and data protection guidance, regularly updated, provides detailed advice on compliance with UK GDPR for AI systems. Topics include lawful basis for AI processing, data protection impact assessments, automated decision-making rights, and fairness in AI. The ICO has also published specific guidance on generative AI, addressing the novel data protection challenges these systems present.
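
The triage sketch below illustrates two of those topics: when the UK GDPR Article 22 safeguards for solely automated decisions are engaged, and when a data protection impact assessment (DPIA) is likely warranted. The yes/no simplification is ours; it is not ICO guidance and not legal advice.

```python
# Simplified triage, assuming a binary reading of the key tests. Real
# assessments are more nuanced; treat this as an illustration only.

def article_22_safeguards_needed(solely_automated: bool,
                                 legal_or_similarly_significant_effect: bool) -> bool:
    """UK GDPR Article 22 bites on solely automated decisions with legal
    or similarly significant effects (e.g. an automated loan refusal)."""
    return solely_automated and legal_or_similarly_significant_effect

def dpia_likely_required(processes_personal_data: bool,
                         novel_technology: bool,
                         large_scale: bool) -> bool:
    """A DPIA is expected where processing is likely to be high risk;
    AI systems using personal data at scale will usually qualify."""
    return processes_personal_data and (novel_technology or large_scale)

print(article_22_safeguards_needed(True, True))   # True
print(dpia_likely_required(True, True, False))    # True
```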

How AI changes this

The regulatory landscape is evolving. The government established the AI Safety Institute (now the AI Security Institute) to conduct technical research on AI risks, though its mandate is broader than financial services. The Digital Regulation Cooperation Forum brings together the FCA, ICO, CMA, and Ofcom to coordinate AI regulation across sectors, reducing the risk of contradictory guidance.

Regulatory sandboxes and innovation initiatives provide pathways for firms to test AI applications with regulatory oversight. The FCA's regulatory sandbox and the PRA's new firm authorisation process both accommodate AI-native business models. These initiatives signal regulatory openness to AI innovation, provided firms can demonstrate appropriate governance and risk management.

The practical effect for financial services firms is that AI governance must satisfy multiple regulators simultaneously. A credit scoring model must meet the PRA's model risk requirements, the FCA's conduct requirements (including the Consumer Duty), the ICO's data protection requirements, and the Equality Act 2010's non-discrimination requirements. There is no single compliance certificate: firms must map their controls framework to each regulator's expectations and demonstrate compliance to each.
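
A simple way to make that total obligation visible is a regime-to-requirement map with a gap check, sketched below. The requirement strings summarise the regimes named above; they are illustrative shorthand, not an exhaustive or official list.

```python
# Illustrative "total obligation" view for one credit scoring model.
# Requirement names are shorthand summaries, not regulator-mandated terms.
CREDIT_SCORING_OBLIGATIONS = {
    "PRA (SS1/23)": ["inventory entry", "independent validation", "ongoing monitoring"],
    "FCA (Consumer Duty)": ["good-outcomes evidence", "fair value assessment"],
    "ICO (UK GDPR)": ["lawful basis", "DPIA", "Article 22 safeguards"],
    "Equality Act 2010": ["protected-characteristic bias testing"],
}

def unmet(evidence: set[str]) -> dict[str, list[str]]:
    """Return, per regime, the requirements with no evidence yet collected."""
    return {
        regime: [r for r in reqs if r not in evidence]
        for regime, reqs in CREDIT_SCORING_OBLIGATIONS.items()
    }

print(unmet({"inventory entry", "lawful basis", "DPIA"}))
```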

The extraterritorial reach of the EU AI Act means that UK firms with EU operations must also comply with EU requirements. This creates a dual compliance burden that firms are managing by building governance frameworks that satisfy both regimes, typically by meeting the more prescriptive EU requirements and demonstrating to UK regulators that these controls are proportionate.
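
In code terms, the "meet the stricter regime" strategy amounts to taking the union of the two control sets, as in this sketch. The control names are placeholders, not a complete list from either regime.

```python
# Placeholder control sets; neither list is complete or authoritative.
UK_CONTROLS = {"model inventory", "validation report", "consumer outcomes testing"}
EU_AI_ACT_CONTROLS = {"technical documentation", "risk management system",
                      "validation report", "human oversight measures"}

# One framework that satisfies both regimes covers the union of controls;
# overlapping items (here, "validation report") are implemented once.
combined = UK_CONTROLS | EU_AI_ACT_CONTROLS
print(sorted(combined))
```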

What to know before you start

Map your AI systems to the relevant regulatory requirements. For each AI system, identify which regulators' expectations apply: PRA (if it is a model under SS1/23), FCA (if it affects consumer outcomes), ICO (if it processes personal data), and equality law (if it makes decisions about individuals). The mapping reveals the total compliance obligation and ensures nothing falls through the gaps between regulators.
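That mapping step can be expressed as a simple rules function, sketched below. The attribute names and regime labels are our own shorthand for the regimes listed above, not legal tests.

```python
def applicable_regimes(is_model: bool,
                       affects_consumer_outcomes: bool,
                       processes_personal_data: bool,
                       decides_about_individuals: bool) -> set[str]:
    """Map one AI system's attributes to the regimes discussed above.
    Attribute names are illustrative shorthand, not legal tests."""
    regimes = set()
    if is_model:
        regimes.add("PRA model risk management (SS1/23)")
    if affects_consumer_outcomes:
        regimes.add("FCA conduct rules, including the Consumer Duty")
    if processes_personal_data:
        regimes.add("ICO / UK GDPR")
    if decides_about_individuals:
        regimes.add("Equality Act 2010")
    return regimes

# A retail credit scoring model typically triggers all four.
print(applicable_regimes(True, True, True, True))
```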

Engage your regulators proactively. Both the FCA and PRA have signalled openness to dialogue about AI approaches. A conversation with your supervisor about your AI governance framework, before a supervisory visit, demonstrates maturity and provides early warning of any misalignment with expectations. The relationship-based nature of UK financial regulation makes this proactive engagement particularly valuable.

Monitor the regulatory trajectory. The UK approach is still developing. The government has indicated that statutory duties may be placed on regulators to have "due regard" to the AI principles, which would strengthen the framework's enforceability. Further FCA and PRA guidance on AI is expected. Build your governance framework to be adaptable, with controls that can be strengthened as regulatory expectations crystallise.

Start with SS1/23 compliance as the foundation. The PRA's model risk management requirements are the most concrete and most immediately enforceable regulatory standard for AI in UK financial services. Firms that build their AI governance on the SS1/23 framework and extend it to address FCA conduct requirements, ICO data protection requirements, and equality obligations will have a robust, defensible approach regardless of how UK AI regulation evolves.
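
The layering recommended here can be sketched as an SS1/23-style baseline extended with the other regimes' controls. As before, every control name is an example, not regulator-mandated language.

```python
# Illustrative layering: an SS1/23-style baseline, extended per regime.
BASELINE_SS1_23 = ["model inventory", "independent validation",
                   "performance monitoring", "governance sign-off"]

EXTENSIONS = {
    "FCA conduct": ["consumer outcomes testing", "vulnerable customer review"],
    "ICO data protection": ["DPIA", "lawful basis record"],
    "Equality Act 2010": ["protected-characteristic bias testing"],
}

def full_framework() -> list[str]:
    """SS1/23 baseline plus regime-specific extensions, in review order."""
    controls = list(BASELINE_SS1_23)
    for extra in EXTENSIONS.values():
        controls.extend(extra)
    return controls

print(full_framework())
```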
