Customer Vulnerability Detection
Last reviewed April 2026
The FCA estimates that 47 per cent of UK adults show characteristics of vulnerability. That is not a niche segment. It is nearly half of every financial institution's customer base. Identifying and responding to customer vulnerability is a regulatory obligation under the Consumer Duty, and AI is the most practical way to do it at scale without relying on customers to self-identify.
What is customer vulnerability detection?
Customer vulnerability detection is the identification of customers who are at heightened risk of harm due to their personal circumstances. It uses predictive analytics and behavioural signals to flag customers who need additional support. The FCA's definition covers four drivers of vulnerability: health conditions (physical or mental), life events (bereavement, job loss, divorce), resilience (low financial capability, low emotional resilience), and capability (limited literacy, digital exclusion). Vulnerability is not static. A customer who is resilient today may become vulnerable after a redundancy, a health diagnosis, or a bereavement.
The challenge is detection. Customers rarely self-identify as vulnerable. Many do not recognise their own vulnerability. Others are reluctant to disclose personal circumstances to a financial institution. Traditional detection relies on frontline staff recognising vulnerability signals during interactions: hesitancy, confusion, distress, or specific disclosures. This works in face-to-face and telephone settings but fails in digital channels where there is no human to observe these signals.
The regulatory expectation is clear. The Consumer Duty requires firms to deliver good outcomes for all customers, including those in vulnerable circumstances. Firms must take extra care to ensure vulnerable customers receive outcomes at least as good as other customers. This requires identifying vulnerability before harm occurs, not after a complaint. Data governance policies must address the sensitive nature of vulnerability data with particular care.
The landscape
The FCA's Consumer Duty, effective from July 2023, elevated vulnerability from a "nice to have" to a core regulatory expectation. Firms must demonstrate that they have processes to identify, record, and respond to customer vulnerability across all channels and products. The FCA has published detailed guidance (FG21/1) on what it expects, including the use of data and technology to support identification.
The intersection of vulnerability detection and fraud detection is operationally critical. Authorised push payment (APP) fraud exploits vulnerability: scammers target individuals who are elderly, recently bereaved, or under financial stress. A vulnerability signal that is not shared with the fraud team is a missed opportunity to prevent harm. Equally, a fraud investigation that does not account for the customer's vulnerable circumstances risks compounding the harm.
Privacy regulation constrains what data can be collected and processed for vulnerability detection. The EU AI Act classifies AI systems that assess vulnerability for credit or insurance purposes as high-risk, adding transparency and governance requirements. Health data, which is directly relevant to vulnerability, is a special category under UK GDPR requiring explicit consent or a substantial public interest basis. Financial institutions must balance the duty to protect vulnerable customers against the duty to handle their personal data lawfully. The ICO's guidance on processing health data in a financial services context is the starting point for any data protection impact assessment.
How AI changes this
Natural language processing detects vulnerability signals in customer communications. Analysis of call transcripts, chat messages, emails, and complaint text identifies linguistic markers of distress, confusion, or coercion. Specific phrases ("I don't understand," "my husband used to handle this," "someone told me to call"), speech patterns (hesitation, repetition, distress), and communication style changes (a previously articulate customer becoming incoherent) are all signals that NLP can detect at scale across every interaction.
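As a sketch of how phrase-level signals might be surfaced at scale, the snippet below runs a small set of marker patterns over a transcript. The phrase list, function name, and patterns are illustrative assumptions; a production system would use a trained language model rather than a keyword list.

```python
import re

# Illustrative marker patterns only -- not an exhaustive or validated list.
DISTRESS_MARKERS = [
    r"i don'?t understand",
    r"my (husband|wife|partner) used to handle this",
    r"someone told me to (call|do this)",
    r"i'?m (so )?confused",
]

def detect_markers(transcript: str) -> list[str]:
    """Return the marker patterns found in a call or chat transcript."""
    text = transcript.lower()
    return [p for p in DISTRESS_MARKERS if re.search(p, text)]

hits = detect_markers("Sorry, I don't understand. My husband used to handle this.")
```

The same scan can run over every transcript in a channel, which is the point: a human can only observe the interactions they handle, while pattern detection covers all of them.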
Behavioural analytics identifies vulnerability signals in transaction and account data. Sudden changes in spending patterns (cessation of regular payments, large unusual transactions, gambling spend), changes in channel usage (a digital-first customer suddenly calling repeatedly), and life event indicators (receipt of a bereavement payment, insurance payouts, salary cessation) are all detectable from data that the institution already holds. No customer disclosure is required.
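A minimal illustration of one such behavioural signal, a sudden change in monthly spend, assuming nothing more than a list of monthly totals. The z-score framing and the three-month window are assumptions for the sketch, not a prescribed method.

```python
from statistics import mean, stdev

def spending_change_score(monthly_spend: list[float], window: int = 3) -> float:
    """Compare the mean of the most recent `window` months against the
    customer's longer history, in units of the historical standard
    deviation (a simple z-score). Higher values = sharper change."""
    history, recent = monthly_spend[:-window], monthly_spend[-window:]
    if len(history) < 2:
        return 0.0  # not enough history to establish a baseline
    sd = stdev(history)
    if sd == 0:
        return 0.0  # perfectly flat history: no scale to measure change against
    return abs(mean(recent) - mean(history)) / sd
```

A customer whose spend collapses after years of stability scores far higher than one whose spend merely fluctuates, which is the distinction the text describes: it is the change, not the level, that signals a possible life event.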
Predictive models combine multiple signals to produce a vulnerability likelihood score. Rather than relying on any single indicator, the model assesses the combination of communication signals, behavioural changes, demographic factors, and product usage patterns. A customer who has recently lost a spouse (life event), whose spending pattern has changed (behavioural signal), and who is calling more frequently (channel change) has a higher vulnerability likelihood than any single signal alone would suggest.
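The combination step described above has the shape of a logistic-regression scorer. The sketch below shows that shape with invented signal names and weights; a real model would learn its weights from labelled outcomes and be validated before deployment.

```python
import math

# Illustrative weights and bias -- assumptions for the sketch, not trained values.
WEIGHTS = {
    "distress_language": 1.2,
    "spending_change": 0.9,
    "channel_shift": 0.6,
    "life_event": 1.5,
}
BIAS = -3.0

def vulnerability_likelihood(signals: dict[str, float]) -> float:
    """Combine graded signals (0.0-1.0 each) into a single likelihood
    between 0 and 1 via a logistic function."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value for name, value in signals.items())
    return 1 / (1 + math.exp(-z))
```

Note how the logistic form captures the article's point: a bereavement alone produces a modest score, but bereavement plus a spending change plus a channel shift pushes the likelihood sharply higher than any single signal would.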
Automated response routing ensures that identified vulnerability triggers appropriate action. A vulnerable customer calling about a complex product is routed to a specialist handler. A vulnerable customer entering arrears is prioritised for proactive outreach. A vulnerable customer targeted by a fraud attempt triggers enhanced fraud controls. The detection is only valuable if it connects to a response.
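Routing logic of this kind can be as simple as an ordered rule table, checked most-urgent first. The flag keys and action names below are illustrative assumptions, not a reference implementation.

```python
def route_vulnerable_customer(context: dict) -> str:
    """Map a vulnerability flag plus interaction context to a response
    action. Rules are checked in priority order: fraud exposure first,
    then arrears, then product complexity."""
    if context.get("fraud_attempt"):
        return "enhanced_fraud_controls"
    if context.get("in_arrears"):
        return "proactive_collections_outreach"
    if context.get("complex_product"):
        return "specialist_handler"
    return "standard_handling_flag_recorded"
```

Even the fallback records the flag: a vulnerability signal that reaches no downstream system is, as the fraud example above illustrates, a missed opportunity to prevent harm.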
What to know before you start
Consent and data protection must be addressed first. Processing customer communications and behavioural data for vulnerability detection requires a clear lawful basis. Legitimate interest is the most likely basis, but the balancing test must account for the sensitivity of vulnerability data and the impact on customer trust. Conduct a Data Protection Impact Assessment before deployment. The ICO will expect one.
False positives in vulnerability detection carry their own risks. A customer incorrectly flagged as vulnerable may experience unwanted restrictions on their account, patronising communications, or reduced access to products. The system must be calibrated to minimise false positives, and the response to a vulnerability flag must be proportionate and respectful. Over-intervention can be as harmful as under-detection.
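One way to operationalise "calibrated to minimise false positives" is to choose the flagging threshold that keeps precision (the share of flags that turn out to be genuine) above a target. A sketch, assuming you hold historical scores with reviewed outcomes; the function name and the 0.8 target are assumptions:

```python
def choose_threshold(scores: list[float], labels: list[int],
                     min_precision: float = 0.8) -> float:
    """Return the lowest score threshold at which precision (true flags /
    all flags) meets the target, so the system flags as many customers as
    possible without drowning handlers in false positives."""
    for t in sorted(set(scores)):
        flagged = [lab for s, lab in zip(scores, labels) if s >= t]
        if flagged and sum(flagged) / len(flagged) >= min_precision:
            return t
    return max(scores)  # no threshold meets the target; flag only the top score
```

The trade-off is explicit: a higher precision target means fewer false flags but more missed vulnerable customers, which is exactly the balance the paragraph above says must be struck deliberately.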
Staff training must accompany technology deployment. An AI system that detects vulnerability and routes the customer to a handler who is not trained to respond appropriately has not improved outcomes. The technology identifies. The human responds. Both capabilities must be developed in parallel.
Start with claims handling and collections, where vulnerability is most likely to be present and where the consequences of failing to detect it are most severe. A vulnerable customer in arrears who receives standard collection correspondence may experience significant harm. A vulnerable customer making a claim who encounters delays or intrusive investigation may suffer disproportionately. These are the use cases where the regulatory and ethical case for detection is strongest, and where the FCA is most likely to assess your performance.