FCA AI Approach

Last reviewed April 2026

The FCA does not have AI rules. It has AI expectations, distributed across the Consumer Duty, the Principles for Businesses, the Senior Managers Regime, and a growing body of published thinking. For firms navigating this landscape, the question is not "what does the AI regulation say?" but "what would the FCA expect to see if it examined our AI systems tomorrow?" The FCA AI approach is pragmatic, outcomes-focused, and deliberately non-prescriptive, which makes it flexible but harder to comply with than a rule book.

What is the FCA AI approach?

The FCA regulates AI in financial services through its existing framework, applying its statutory objectives (consumer protection, market integrity, competition) and its Principles for Businesses to AI systems. The FCA's position, articulated in DP5/22 and the subsequent feedback statement (FS2/23), is that AI is a tool, not a new category of regulation. The rules that apply to a credit decision made by a human apply equally to a credit decision made by a model. What changes is how the firm evidences compliance: how it demonstrates that the AI decision was fair, transparent, and in the customer's interest.

The Consumer Duty is the regulation with the sharpest relevance to AI. It requires firms to deliver good outcomes for retail customers across four areas: products and services, price and value, consumer understanding, and consumer support. An AI system that prices unfairly, provides misleading information, makes opaque decisions, or denies customers reasonable support fails the Duty, regardless of its technical sophistication. The Duty's outcomes focus means the FCA assesses the results, not the technology.

The Senior Managers and Certification Regime (SM&CR) creates individual accountability for AI governance. The FCA expects firms to be able to identify a senior manager responsible for the firm's use of AI, with sufficient understanding and authority to discharge that responsibility. The FCA tests this during supervisory interactions: can the responsible senior manager explain the firm's AI systems, the risks they carry, and the controls in place?

The landscape

The FCA's feedback statement on AI and machine learning (FS2/23, responding to DP5/22) confirmed several positions. First, the FCA sees benefits from AI adoption and does not intend to create rules that discourage it. Second, existing regulations are broadly sufficient to govern AI, but firms need to apply them more rigorously to automated systems. Third, the FCA intends to publish further guidance on specific topics, including explainability, bias testing, and governance structures for AI.

The FCA has signalled particular interest in three areas. First, fairness in AI-driven pricing: whether models that use behavioural data to optimise prices deliver fair value to all customer segments, including those with characteristics of vulnerability. Second, transparency of AI-driven decisions: whether customers understand when and how AI affects them. Third, governance of AI: whether firms have adequate oversight structures, including board-level understanding and senior management accountability.

The FCA participates in the Digital Regulation Cooperation Forum alongside the ICO, CMA, and Ofcom. This forum coordinates cross-regulatory approaches to AI, reducing the risk of contradictory expectations. For firms, this means FCA expectations will increasingly align with ICO data protection requirements and CMA competition considerations, though differences in regulatory mandates will persist.

How AI changes this

The Consumer Duty's outcomes monitoring obligation drives AI-specific compliance activities. Firms must monitor customer outcomes across demographic groups, identify where outcomes are poor, and take action. For AI-driven products and services, this means implementing fairness monitoring that tracks outcomes by protected characteristics and vulnerability indicators. The Duty does not prescribe how to do this. It requires that it is done.
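As an illustration of the kind of monitoring the Duty leaves to firms to design, the sketch below compares approval rates across customer segments and flags large gaps. The record layout, segment names, and 20-point threshold are assumptions for this example, not FCA-prescribed values:

```python
from collections import defaultdict

# Hypothetical decision records: (customer segment, approved?).
# In practice these would come from the firm's own decisioning logs,
# with segments built from vulnerability indicators or demographics.
decisions = [
    ("segment_a", True), ("segment_a", True), ("segment_a", False),
    ("segment_b", True), ("segment_b", False), ("segment_b", False),
]

def approval_rates(records):
    """Approval rate per customer segment."""
    totals, approved = defaultdict(int), defaultdict(int)
    for segment, ok in records:
        totals[segment] += 1
        approved[segment] += ok
    return {s: approved[s] / totals[s] for s in totals}

def flag_disparities(rates, threshold=0.2):
    """Flag segment pairs whose approval rates differ by more than threshold."""
    segments = sorted(rates)
    return [
        (a, b, round(abs(rates[a] - rates[b]), 3))
        for i, a in enumerate(segments)
        for b in segments[i + 1:]
        if abs(rates[a] - rates[b]) > threshold
    ]

rates = approval_rates(decisions)
print(rates)                     # approval rate by segment
print(flag_disparities(rates))   # segment pairs breaching the threshold
```

The output of `flag_disparities` is a trigger for investigation, not a verdict: the Duty requires the firm to identify the root cause of a poor outcome and act on it.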

The FCA's thematic reviews increasingly examine AI use. Past reviews have covered algorithmic trading, automated lending decisions, and AI in insurance pricing. These reviews generate published findings that signal supervisory expectations. Firms should treat thematic review findings as de facto guidance: even if the findings are directed at specific firms, the expectations they articulate apply to the sector.

Responsible AI practices align directly with FCA expectations. Fair treatment, transparent communication, robust governance, and effective customer support are both responsible AI principles and FCA requirements. Firms that build their AI programmes around responsible AI practices are simultaneously building FCA compliance. The alignment is not coincidental; the FCA's expectations are informed by the same principles.

The FCA's regulatory sandbox provides a structured environment for testing AI innovations with regulatory oversight. Firms can test AI-driven products and services in a controlled environment, with FCA engagement that helps identify compliance issues before full deployment. The sandbox is particularly valuable for novel AI applications where the regulatory treatment is uncertain.

What to know before you start

Map your AI systems to the Consumer Duty outcomes. For each AI system that affects retail customers, assess whether it supports or undermines each of the four outcomes: fair products, fair value, customer understanding, and customer support. This mapping identifies the AI systems that carry the highest conduct risk and prioritises them for enhanced governance.
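The mapping above can be kept as a simple scoring table. A minimal sketch, in which the system names and scores are invented for illustration and the 1-to-5 scale is an assumption rather than an FCA convention:

```python
# Score each AI system against the four Consumer Duty outcomes
# (1 = clearly supports the outcome, 5 = high conduct risk).
# The systems and scores below are illustrative, not real assessments.
DUTY_OUTCOMES = ("products_services", "price_value", "understanding", "support")

systems = {
    "credit_scoring_model": {"products_services": 2, "price_value": 4,
                             "understanding": 4, "support": 2},
    "chatbot_triage":       {"products_services": 1, "price_value": 1,
                             "understanding": 3, "support": 4},
}

def prioritise(systems):
    """Rank systems by total conduct-risk score, highest first."""
    return sorted(systems, key=lambda name: sum(systems[name].values()),
                  reverse=True)

print(prioritise(systems))  # systems ordered for enhanced governance
```

The ranked list is the output that matters: it tells the firm which systems to put through enhanced governance first.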

Build your governance framework to be demonstrable. The FCA does not check a compliance box. It assesses whether governance is effective in practice: whether the board understands AI risks, whether monitoring catches issues, whether customers receive fair outcomes, and whether complaints are resolved appropriately. Evidence of effective governance, not just documented governance, is what the FCA expects to see.

Prepare for supervisory questions. The FCA's approach to AI supervision is evolving from thematic reviews to business-as-usual supervision. This means supervisors will ask about AI during routine firm assessments. Prepare briefing materials that explain the firm's AI systems, their governance, and their outcomes in non-technical terms. The SM&CR responsible individual should be able to discuss these topics confidently.

Start with Consumer Duty compliance for your existing AI systems. Conduct an outcomes review: are your AI-driven products and services delivering good outcomes for all customer segments? Where they are not, identify the root cause (data bias, model design, governance gap) and remediate. This is the compliance activity that the FCA is most likely to examine and the one where deficiencies carry the most significant consequences.
