Fraud Detection

Last reviewed April 2026

The dominant type of financial fraud in the UK is no longer a stolen credit card or a hacked account. It is authorised push payment fraud, where the customer themselves initiates the transfer, deceived by a scammer they believe is legitimate. How do you detect fraud when the customer is the one pressing the button?

What is fraud detection?

Fraud detection in financial services is the identification and prevention of dishonest financial activity, whether that is payment fraud, identity theft, account takeover, insurance fraud, or market manipulation. It sits at the intersection of technology, operations, and regulation: a fraud system must catch criminal activity in real time, minimise disruption to legitimate customers, and satisfy the regulator that controls are proportionate and effective.

The landscape has shifted fundamentally. Traditional fraud (stolen credentials, counterfeit cards) has been suppressed by strong customer authentication (SCA) under PSD2. What has replaced it is harder to detect. Authorised push payment (APP) fraud, where the victim is socially engineered into making a payment they believe is legitimate, accounted for over 450 million pounds in losses in the UK in 2023. The customer authenticates genuinely. The payment passes all technical controls. The fraud lies in the deception, not the transaction.

Real-time payment schemes compress the detection window. Under Faster Payments, a transfer settles in seconds. The time available to intervene, between the customer initiating the payment and the funds leaving the account, is measured in milliseconds. A fraud detection system that analyses transactions in batch overnight is not fit for purpose in a real-time payments world.
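As a sketch of what that constraint means operationally: the scoring call has to live inside a hard latency budget, with a conservative fallback when it overruns. The function names, the 50 ms budget, and the fallback policy below are illustrative assumptions, not a reference architecture.

```python
import time

def decide_in_budget(payment, score_fn, budget_ms=50):
    """Call the scoring function, but fall back to a conservative
    rules-only decision if it overruns the latency budget.
    score_fn is a hypothetical scoring callable; budget is illustrative."""
    start = time.monotonic()
    decision = score_fn(payment)
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > budget_ms:
        # Too slow for an in-line answer: route to the fast rules path.
        return "rules_only_review"
    return decision
```

A real deployment would enforce the budget pre-emptively (cancelling the slow call rather than checking afterwards), but the design point is the same: the batch-overnight model has no place in this loop.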

The landscape

The UK's mandatory reimbursement regime for APP fraud, effective from October 2024, changed the economics. Sending and receiving payment service providers now share liability for APP fraud losses. This means banks are financially incentivised not just to detect fraud in outgoing payments but to identify accounts receiving fraudulent funds, the mule accounts that have historically been a blind spot.

Cross-institutional data sharing frameworks are emerging in response. Confirmation of Payee, which verifies that the recipient's name matches the account, is now mandatory. But name-matching is a blunt instrument against sophisticated scammers who use accounts in the victim's own name or in names similar enough to pass the check. More promising are initiatives to share fraud signals between institutions in real time, though privacy and competition concerns constrain their scope.
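To see why name-matching is blunt, consider a minimal sketch of the normalise-and-compare logic a Confirmation of Payee check performs. The normalisation rules and similarity threshold here are illustrative assumptions, not the Pay.UK specification; a scammer-controlled account named closely enough to the intended payee can still clear the "close match" bar.

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    """Lower-case, strip a leading title and punctuation (illustrative rules)."""
    name = name.lower()
    for title in ("mr ", "mrs ", "ms ", "dr "):
        if name.startswith(title):
            name = name[len(title):]
    return "".join(ch for ch in name if ch.isalnum() or ch.isspace()).strip()

def cop_match(entered: str, registered: str, close_threshold: float = 0.85) -> str:
    """Return 'match', 'close_match', or 'no_match' for an entered payee name.
    The 0.85 threshold is an assumption for illustration."""
    a, b = normalise(entered), normalise(registered)
    if a == b:
        return "match"
    ratio = SequenceMatcher(None, a, b).ratio()
    return "close_match" if ratio >= close_threshold else "no_match"
```

Note that a near-miss name ("Jon Smith" against "John Smith") lands in the close-match band, which is exactly the gap sophisticated scammers exploit.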

Insurance fraud operates on a different timescale but similar principles. Claims fraud, whether exaggerated, staged, or entirely fabricated, costs UK insurers an estimated 1.2 billion pounds annually. Detection typically happens during claims processing, but the most effective interventions happen earlier, at the point of application or policy inception, where patterns visible to AI may not be apparent to a human reviewer.

How AI changes this

Behavioural biometrics is the most significant recent advance in APP fraud detection. Rather than analysing the transaction itself, these systems analyse how the customer interacts with their device: typing speed, scroll patterns, hesitation before confirming, and whether the session behaviour matches the customer's established pattern. A customer being coached through a payment by a scammer on the phone behaves differently from one making a routine transfer. This is production-ready and deployed by several UK banks.
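A minimal sketch of the idea, assuming sessions are already reduced to a handful of numeric features: score each session by how far it sits from the customer's own baseline. The feature names are hypothetical; production systems use far richer signals.

```python
from statistics import mean, stdev

def session_anomaly_score(session: dict, history: list) -> float:
    """Mean absolute z-score of this session's features against the
    customer's own baseline. Features are hypothetical examples."""
    features = ("typing_ms_per_char", "scroll_events", "confirm_hesitation_s")
    zscores = []
    for f in features:
        baseline = [h[f] for h in history]
        mu, sigma = mean(baseline), stdev(baseline) or 1.0  # guard zero spread
        zscores.append(abs(session[f] - mu) / sigma)
    return sum(zscores) / len(zscores)
```

A customer being coached through a payment types slower, scrolls less, and hesitates longer before confirming, all of which pushes the score far above their routine sessions.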

Graph analytics maps relationships between accounts, identifying mule networks that receive and redistribute fraudulent funds. A mule account may not look suspicious in isolation, but when viewed as part of a network that receives small amounts from many compromised accounts and rapidly moves funds onward, the pattern is distinctive. This capability is essential for meeting the receiving-bank obligations under the APP fraud reimbursement regime.
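The fan-in pattern described above can be sketched without a graph library: flag accounts that receive from many distinct senders and forward most of what they receive. The thresholds are illustrative, not a production policy.

```python
from collections import defaultdict

def flag_mule_candidates(payments, min_senders=5, forward_ratio=0.8):
    """Flag accounts with high fan-in that rapidly forward most of what
    they receive. payments: iterable of (sender, receiver, amount).
    Thresholds are illustrative assumptions."""
    senders_in = defaultdict(set)
    received = defaultdict(float)
    sent = defaultdict(float)
    for sender, receiver, amount in payments:
        senders_in[receiver].add(sender)
        received[receiver] += amount
        sent[sender] += amount
    return {
        acct for acct in received
        if len(senders_in[acct]) >= min_senders
        and sent[acct] / received[acct] >= forward_ratio
    }
```

No single inbound payment here is remarkable; the signal only appears when inflows and outflows are viewed together across the network.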

Real-time scoring models assess each payment against the customer's behavioural baseline, the recipient's risk profile, and the broader context. The challenge is calibration: too sensitive and the system blocks legitimate payments, damaging the customer experience; too lenient and fraud passes through. Machine learning models continuously recalibrate based on confirmed fraud cases and false positive feedback, improving over time in a way that static rules cannot.
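As a toy illustration of combining those signals, here is a hypothetical linear score over baseline deviation, recipient novelty, and recipient risk. Real systems learn these weights from confirmed outcomes; the ones below are made up.

```python
def payment_risk_score(amount, baseline_mean_amount, new_recipient,
                       recipient_risk, weights=None):
    """Toy linear risk score in [0, 1] combining deviation from the
    customer's baseline, recipient novelty, and recipient risk.
    Weights and caps are illustrative assumptions."""
    w = weights or {"deviation": 0.5, "novelty": 0.2, "recipient": 0.3}
    # Cap the baseline multiple at 10x so one feature can't dominate.
    deviation = min(amount / max(baseline_mean_amount, 1.0), 10.0) / 10.0
    return (w["deviation"] * deviation
            + w["novelty"] * (1.0 if new_recipient else 0.0)
            + w["recipient"] * recipient_risk)
```

The calibration problem in the text is then the choice of the threshold applied to this score: where on the [0, 1] scale a payment flips from "allow" to "hold".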

The connection to anti-money laundering and KYC is increasingly operational. Fraud, money laundering, and identity crime share infrastructure, data sources, and often the same criminal networks. Institutions that integrate their financial crime platforms, sharing intelligence between fraud, AML, and KYC functions, detect more and spend less on investigation than those that operate these functions in silos.

What to know before you start

Your fraud detection system's performance is measured in milliseconds and in pennies. Every millisecond of latency in a real-time payment decision is a millisecond where a legitimate customer waits. Every false positive is a blocked payment and a frustrated customer. Every false negative is a loss. Optimise for the balance, not for any single metric. The best fraud systems target a false positive rate below 1 in 10,000 for low-risk transactions while maintaining detection rates above 90 per cent for known fraud typologies.
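The trade-off can be made concrete by computing both rates at a candidate blocking threshold over held-out scores. The scores and threshold below are synthetic, chosen only to show the two metrics moving against each other.

```python
def confusion_rates(scores_legit, scores_fraud, threshold):
    """False-positive rate and detection rate at a given blocking
    threshold (block when risk score >= threshold). Inputs are
    held-out scores for legitimate and confirmed-fraud payments."""
    fp = sum(s >= threshold for s in scores_legit)
    tp = sum(s >= threshold for s in scores_fraud)
    return fp / len(scores_legit), tp / len(scores_fraud)
```

Raising the threshold lowers the false-positive rate and the detection rate together; the targets quoted above (below 1 in 10,000 and above 90 per cent) are only achievable when the two score distributions are well separated.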

APP fraud detection requires a fundamentally different approach than traditional transaction fraud. You are not looking for an anomalous transaction; you are looking for an anomalous interaction. The investment in behavioural analytics, session-level data capture, and device intelligence is separate from and additional to your transaction monitoring investment. Budget accordingly.

Data sharing with other institutions is valuable but operationally complex. The legal framework exists under the UK's data protection exemptions for fraud prevention, and industry bodies like Cifas provide structured sharing mechanisms. But the technical integration, data quality requirements, and governance processes add meaningful cost. Start with Confirmation of Payee compliance and the established fraud data sharing schemes before attempting bespoke bilateral sharing.

Model retraining cadence matters more than initial model accuracy. Fraud patterns shift rapidly. A model trained on last year's fraud typology will miss this year's scam variant. Build automated monitoring that detects model performance degradation and triggers retraining before losses accumulate. Monthly retraining is a common cadence; some institutions retrain weekly for their highest-risk payment channels. Ensure your fraud outputs feed into regulatory reporting workflows so that SAR filings and fraud statistics remain consistent and timely.
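A minimal sketch of such a monitor, assuming confirmed fraud outcomes flow back with a label for whether the model caught them; the window size and detection floor are illustrative assumptions.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling detection-rate monitor that signals retraining when the
    model's hit rate on confirmed fraud drops below a floor.
    Window size and floor are illustrative, not recommendations."""
    def __init__(self, window=200, detection_floor=0.85):
        self.outcomes = deque(maxlen=window)  # 1 = caught, 0 = missed
        self.floor = detection_floor

    def record(self, caught: bool) -> None:
        self.outcomes.append(1 if caught else 0)

    def retrain_needed(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # wait for a full window before judging
        return sum(self.outcomes) / len(self.outcomes) < self.floor
```

In practice the trigger would feed a retraining pipeline rather than a boolean, but the principle stands: detect degradation from confirmed outcomes, not from the calendar alone.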

Exploring AI for your organisation? Book fifteen minutes on the calendar.

Let’s build AI together