Consumer Duty

Last reviewed April 2026

The FCA's Consumer Duty does not mention AI once. It does not need to. It requires firms to deliver good outcomes for retail customers, and if an AI system produces bad outcomes, the firm is accountable regardless of whether a human or a machine made the decision. For financial services firms deploying AI, the Consumer Duty is the regulatory instrument that connects model governance to customer impact, and it is the one the FCA will use when AI harms consumers.

What is the Consumer Duty?

The Consumer Duty, effective from 31 July 2023 for new and existing products and from 31 July 2024 for closed products, establishes a higher standard of consumer protection across retail financial services. It requires firms to act in good faith, avoid causing foreseeable harm, and enable customers to pursue their financial objectives. These principles are operationalised through four outcomes: products and services (designed to meet customers' needs), price and value (fair relationship between price and benefits), consumer understanding (clear communication), and consumer support (accessible and responsive service).

For AI systems, the Duty creates a direct link between model outputs and regulatory compliance. A credit scoring model that declines applicants who would actually repay is producing a bad product outcome. A pricing model that charges vulnerable customers more because they are less likely to switch is producing a bad value outcome. A claims triage model that routes complex cases to automated processing without adequate explanation is producing a bad understanding outcome. Each of these is a Consumer Duty failure, actionable by the FCA.

The Duty is outcomes-focused, not process-focused. Having a responsible AI framework is necessary but not sufficient. The FCA assesses whether customers actually receive good outcomes, not whether the firm has documented its intention to deliver them. This means firms must monitor outcomes, identify where they are poor, and take action. For AI-driven decisions, this requires fairness monitoring, outcomes tracking, and the operational capability to intervene when outcomes deteriorate.

The landscape

The FCA published its first annual report on the Consumer Duty's implementation in 2024, identifying areas of concern across the industry. While AI was not a standalone topic, several of the identified issues (pricing practices, vulnerability identification, and complaints handling) directly implicate AI systems. The FCA's thematic work on AI in pricing, signalled in its 2024 feedback statement, will examine how AI pricing models interact with the Duty's value outcome.

The vulnerability dimension is where the Duty's interaction with AI is most acute. The FCA defines vulnerability broadly: health conditions, life events, financial resilience, and capability constraints. AI systems that do not account for vulnerability risk producing outcomes that disproportionately harm those least able to absorb it. A fraud detection system that freezes accounts without considering that the customer may be a victim of economic abuse is an example of an AI system that fails the vulnerability test.

The Duty's annual outcomes review requirement forces firms to assess their entire product and service portfolio, including AI-driven elements, against the four outcomes on a regular basis. This is not a compliance exercise that can be completed once and filed. It is an ongoing operational obligation that requires data, analysis, and action. For firms with significant AI deployment, the outcomes review is a substantial undertaking that requires integration between AI monitoring systems and business outcomes data.

How AI changes this

Outcomes monitoring platforms track the end-to-end customer journey through AI-driven processes, connecting model outputs to customer outcomes. Did the customer who was offered a particular product actually benefit from it? Did the customer who was declined for credit go on to experience financial difficulty (suggesting the decline was appropriate) or thrive (suggesting the decline was a missed opportunity and a bad outcome)? These outcome-level questions require data that spans the full customer lifecycle, not just the point of decision.
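A minimal sketch of this kind of outcomes tracking, joining model decisions to later customer outcomes on a customer ID. All record types, field names, and figures here are illustrative assumptions, not a real monitoring platform:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    customer_id: str
    model: str
    outcome: str  # e.g. "approved" / "declined"

@dataclass
class ObservedOutcome:
    customer_id: str
    in_financial_difficulty: bool

def flag_poor_outcomes(decisions, observed):
    """Join model decisions to downstream customer outcomes and flag
    customers whose approval was followed by financial difficulty,
    which may indicate a poor product outcome under the Duty."""
    by_customer = {o.customer_id: o for o in observed}
    flagged = []
    for d in decisions:
        o = by_customer.get(d.customer_id)
        if o and d.outcome == "approved" and o.in_financial_difficulty:
            flagged.append(d.customer_id)
    return flagged

decisions = [Decision("c1", "credit_v2", "approved"),
             Decision("c2", "credit_v2", "approved"),
             Decision("c3", "credit_v2", "declined")]
observed = [ObservedOutcome("c1", True),
            ObservedOutcome("c2", False)]
print(flag_poor_outcomes(decisions, observed))  # ['c1']
```

The point of the sketch is the join itself: outcome-level questions need decision records and lifecycle data in the same place, which is usually the hard part.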

Fairness testing against vulnerability indicators extends traditional protected characteristic analysis. The Duty requires good outcomes for all customers, including vulnerable ones. This means testing AI outputs not just across demographic groups but across vulnerability dimensions: customers with low financial resilience, customers experiencing difficult life events, and customers with limited capability to engage with financial products. These dimensions are harder to measure than protected characteristics but equally important under the Duty.
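As a sketch of what segment-level fairness testing might look like, assuming vulnerability segments have already been identified (segment labels and data are hypothetical):

```python
from collections import defaultdict

def approval_rates_by_segment(records):
    """Compute the approval rate per vulnerability segment.
    Each record is (segment, approved); segments are illustrative
    vulnerability dimensions such as 'low_resilience'."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for segment, approved in records:
        totals[segment] += 1
        if approved:
            approvals[segment] += 1
    return {s: approvals[s] / totals[s] for s in totals}

records = [("low_resilience", False), ("low_resilience", False),
           ("low_resilience", True), ("none", True), ("none", True),
           ("none", False)]
rates = approval_rates_by_segment(records)
# A large gap between segments is a prompt for investigation,
# not automatic proof of a Duty breach.
gap = abs(rates["low_resilience"] - rates["none"])
```

A rate gap on its own proves nothing; the Duty question is whether the outcome is good for the segment, which requires the kind of downstream outcome data discussed above.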

Value assessment for AI-driven pricing requires demonstrating that the price charged to each customer segment reflects the value received. An AI model that optimises pricing based on price sensitivity (willingness to pay) rather than risk alone may produce prices that the FCA considers poor value for price-insensitive customers. The Duty requires firms to assess whether this pricing approach is consistent with fair value, and to justify any differential pricing that is not based on risk or cost.
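One simple way to surface this risk is to compare the price charged to each segment against the expected cost of serving it. The segments and figures below are invented for illustration:

```python
def price_to_risk_ratio(segments):
    """For each customer segment, divide the average price charged
    by the expected cost (risk). A ratio well above the portfolio
    norm for a price-insensitive segment is a fair-value red flag."""
    return {name: price / cost for name, (price, cost) in segments.items()}

segments = {
    "switchers":     (100.0, 80.0),   # price-sensitive, shop around
    "non_switchers": (140.0, 80.0),   # same risk, higher price
}
ratios = price_to_risk_ratio(segments)
# Same underlying risk, markedly different margin: exactly the
# pattern a fair-value assessment would need to justify.
```

A real value assessment would use actuarial cost models rather than a single cost figure, but the comparison it must support is the same one.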

Transparency in AI-driven communications ensures that customers understand the decisions being made about them. The consumer understanding outcome requires that communications are clear, fair, and not misleading. For AI-driven decisions, this means explaining the basis for the decision in language the customer can understand, providing clear information about how to challenge the decision, and ensuring that automated communications maintain the same quality as human ones.

What to know before you start

The Consumer Duty applies to outcomes, not intentions. A firm that deploys a well-governed, thoroughly tested AI model that nonetheless produces poor outcomes for a customer segment is in breach. Governance and testing reduce the likelihood of poor outcomes, but they do not provide a safe harbour. Monitor actual outcomes continuously and be prepared to intervene, including disabling a model, when outcomes deteriorate.

Map every AI-driven customer touchpoint to the four outcomes. For each touchpoint, assess: does the AI support a good product outcome? A good value outcome? A good understanding outcome? A good support outcome? Where the answer is uncertain, investigate. The mapping exercise reveals where AI creates the most significant Duty risk and prioritises remediation.
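The mapping exercise can be kept honest with something as simple as a gap-finding checklist: every AI touchpoint crossed with every outcome, with unassessed pairs surfaced for review. Touchpoint names here are hypothetical:

```python
# The Duty's four outcomes, used as the assessment checklist.
OUTCOMES = ("products_and_services", "price_and_value",
            "consumer_understanding", "consumer_support")

def unassessed(touchpoints):
    """Return (touchpoint, outcome) pairs that have not yet been
    assessed, so the review can prioritise its gaps."""
    gaps = []
    for name, assessed in touchpoints.items():
        for outcome in OUTCOMES:
            if outcome not in assessed:
                gaps.append((name, outcome))
    return gaps

touchpoints = {
    "credit_scoring_model": {"products_and_services", "price_and_value"},
    "claims_triage_model": set(),
}
gaps = unassessed(touchpoints)
```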

Vulnerability must be designed into AI systems, not bolted on. If your credit model, pricing model, or claims model does not account for vulnerability characteristics, it cannot deliver good outcomes for vulnerable customers. Consider incorporating vulnerability indicators into model design, implementing vulnerability-sensitive decision thresholds, and ensuring that automated processes include pathways for human intervention when vulnerability is detected.
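A minimal sketch of a vulnerability-sensitive decision pathway, assuming a vulnerability flag is available at decision time (the threshold and review band are illustrative parameters, not recommendations):

```python
def route_decision(score, vulnerable, threshold=0.5, review_band=0.1):
    """Route a model decision. Flagged-vulnerable customers whose
    score falls near the threshold are sent to human review rather
    than being automatically declined."""
    if vulnerable and abs(score - threshold) < review_band:
        return "human_review"
    return "approve" if score >= threshold else "decline"

route_decision(0.45, vulnerable=True)   # near-threshold: human review
route_decision(0.45, vulnerable=False)  # same score, auto-decline
route_decision(0.70, vulnerable=True)   # clear approval, no review
```

The design choice worth noting is that the human-review pathway is part of the decision function itself, not a downstream complaints process: vulnerability handling designed in, not bolted on.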

Start with an outcomes review of your existing AI-driven products and services. Use the FCA's outcomes framework to assess whether customers are receiving good outcomes across all four dimensions. Where outcomes are poor, trace the cause to the AI system, the process, or the policy. Remediate the root cause, not just the symptom. The annual outcomes review is both a compliance obligation and the best diagnostic tool for AI governance under the Consumer Duty.
