Identity Verification

Last reviewed April 2026

A financial institution rejected a legitimate customer because the photograph on their driving licence was ten years old and the facial recognition system could not match it to their selfie. Another institution approved a fraudulent account opened with a synthetic identity that had never belonged to a real person. Identity verification must solve both problems simultaneously: let the right people in and keep the wrong people out.

What is identity verification?

Identity verification is the process of confirming that a person is who they claim to be. In financial services, it is the first step of know your customer (KYC) checks and a prerequisite for opening an account, executing a transaction, or accessing a service. The process combines document verification (is this identity document genuine?), biometric verification (is the person presenting the document the same person pictured on it?), and data verification (does the information on the document match authoritative data sources?).

The threat landscape has shifted. A decade ago, identity fraud primarily involved stolen or forged physical documents. Today, synthetic identity fraud, where criminals construct entirely fictitious identities using combinations of real and fabricated information, accounts for a growing share of identity fraud in the UK. The UK's National Fraud Database recorded a 22 per cent increase in identity fraud cases in 2023, and synthetic identities are particularly difficult to detect because they are not linked to a real victim who might notice and report the fraud.

The verification chain has multiple failure points. The document may be genuine but stolen. The biometric may match but be spoofed using a deepfake. The data may verify against a source, but that source may itself contain fabricated records. Effective identity verification layers multiple checks so that a failure at one point is caught by another. No single verification method is sufficient on its own.
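The layering described above can be sketched in a few lines. This is an illustrative outline only, not a real provider integration: the three check functions are hypothetical stand-ins for document, biometric, and data verification services. The key design point is that every layer runs even after one fails, so each check can catch what the others miss.

```python
from dataclasses import dataclass


@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""


def verify_identity(checks) -> tuple[bool, list[CheckResult]]:
    """Run every layer, even after a failure, so a miss at one
    point in the chain can be caught by another."""
    results = [check() for check in checks]
    return all(r.passed for r in results), results


# Hypothetical layer implementations, for illustration only.
def document_check() -> CheckResult:
    return CheckResult("document", passed=True, detail="security features present")


def biometric_check() -> CheckResult:
    return CheckResult("biometric", passed=True, detail="liveness and face match")


def data_check() -> CheckResult:
    return CheckResult("data", passed=False, detail="address not found at source")


approved, results = verify_identity([document_check, biometric_check, data_check])
# approved is False: the data layer caught what the other two passed.
```

In a production system each layer would call an external verification provider and the results would feed a risk decision rather than a simple all-pass rule, but the principle of independent, overlapping checks is the same.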

The landscape

The UK's digital identity ecosystem is evolving. The UK Digital Identity and Attributes Trust Framework, maintained by the Department for Science, Innovation and Technology, sets standards for identity service providers. The trust framework defines levels of confidence and the evidence required to achieve each level. Financial services typically require the highest confidence level, which demands verification against authoritative government sources.

The EU's eIDAS 2.0 regulation will require member states to offer citizens a digital identity wallet by 2026. Financial institutions will be obliged to accept these wallets for customer verification. This shifts the verification model from institutions checking documents to institutions verifying credentials issued by governments. The infrastructure for accepting these wallets is not yet widely deployed in financial services.

Deepfake technology has become a direct threat to biometric verification. Commercially available tools can generate realistic video of a person's face from a single photograph. Presentation attack detection, the technology that distinguishes a live person from a photograph, video, or deepfake, is in an arms race with the tools used to defeat it. The certification standard for liveness detection, ISO 30107-3, is the baseline that financial institutions should require from their verification providers.

How AI changes this

Document authenticity verification uses computer vision to detect forgery at the micro-level: font consistency, security feature presence, micro-printing, and hologram patterns that are invisible to the human eye in a digital image. AI models trained on thousands of genuine and forged documents for each document type achieve detection rates above 95 per cent for known forgery techniques. The limitation is novel forgery methods that the model has not been trained on, which is why model retraining on emerging fraud typologies is a continuous requirement.

Biometric matching has matured significantly. Modern AI models achieve false match rates below 0.01 per cent while maintaining false non-match rates below 1 per cent, meaning they rarely confuse two different people while also rarely rejecting the genuine document holder. These performance levels hold across age gaps between the document photograph and the live image of up to ten years, and across demographic groups when trained on diverse datasets. The accuracy improvement feeds directly into customer due diligence workflows, where a failed biometric check previously required manual intervention.
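The two error rates quoted above are straightforward to compute once a matcher produces similarity scores. A minimal sketch, using invented toy scores: the false match rate is the share of different-person comparisons that score above the decision threshold, and the false non-match rate is the share of same-person comparisons that score below it.

```python
def match_rates(genuine_scores, impostor_scores, threshold):
    """False match rate (FMR): impostor pairs scoring at or above
    the threshold. False non-match rate (FNMR): genuine pairs
    scoring below it. Raising the threshold trades FMR for FNMR."""
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fmr, fnmr


genuine = [0.91, 0.88, 0.95, 0.79, 0.93]   # same-person comparison scores
impostor = [0.12, 0.34, 0.08, 0.41, 0.22]  # different-person comparison scores

fmr, fnmr = match_rates(genuine, impostor, threshold=0.8)
# On this toy sample: fmr = 0.0, fnmr = 0.2 (one genuine pair rejected).
```

The threshold is a policy choice: tightening it reduces false matches (fraud risk) at the cost of rejecting more genuine customers, which is exactly the trade-off the headline figures of 0.01 per cent and 1 per cent describe.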

Synthetic identity detection is the emerging frontier. Because synthetic identities are constructed rather than stolen, they do not trigger traditional fraud alerts. AI models detect them by identifying inconsistencies that a synthetic identity cannot avoid: a credit history that is too thin for the claimed age, an address history that does not correlate with known residential patterns, or data elements that verify individually but are inconsistent when cross-referenced. This requires integration between identity verification and fraud detection systems.
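The cross-referencing logic can be illustrated with simple rules. The thresholds and field names below are invented for illustration; real systems would learn these patterns from labelled fraud data rather than hard-code them.

```python
from datetime import date


def consistency_flags(claimed_dob: date, credit_history_years: int,
                      address_count: int) -> list[str]:
    """Flag data elements that may verify individually but are
    inconsistent when cross-referenced (illustrative thresholds)."""
    flags = []
    age = (date.today() - claimed_dob).days // 365
    if age >= 30 and credit_history_years < 2:
        flags.append("credit file too thin for claimed age")
    if age >= 30 and address_count < 2:
        flags.append("address history shorter than expected for claimed age")
    return flags
```

A 40-year-old applicant with a one-year credit file and a single recorded address would trigger both flags; each element passes its individual check, but together they fit the profile of a constructed identity.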

Reusable verified identities reduce the need for repeated verification. Once a customer's identity has been verified to a high confidence level, that verification can be stored and reused for subsequent interactions, with periodic re-verification triggered by risk events. AI determines when re-verification is needed based on the customer's risk profile and behaviour, rather than applying a fixed schedule. This reduces friction for the customer and cost for the institution.
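One way to implement event-driven re-verification is to accumulate weighted risk events against a threshold instead of checking the calendar. The event names, weights, and threshold below are assumptions for the sketch, not a standard.

```python
# Illustrative risk weights per event type (assumed values).
RISK_EVENTS = {"device_change": 2, "address_change": 3, "high_value_transfer": 4}


def needs_reverification(base_risk: int, events: list[str],
                         threshold: int = 5) -> bool:
    """Trigger re-verification when accumulated risk crosses a
    threshold, rather than on a fixed schedule. Unknown events
    get a default weight of 1."""
    score = base_risk + sum(RISK_EVENTS.get(e, 1) for e in events)
    return score >= threshold


needs_reverification(1, ["device_change", "high_value_transfer"])  # True: 1+2+4 = 7
needs_reverification(1, ["address_change"])                        # False: 1+3 = 4
```

A low-risk customer with no recent events never crosses the threshold and is never re-verified, while a single high-risk event can trigger an immediate check, which is the friction-versus-cost trade the paragraph describes.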

What to know before you start

Bias in biometric systems is a real and measurable problem. Academic studies and NIST testing have documented higher false non-match rates for certain demographic groups in some commercial systems. This means legitimate customers from those groups are more likely to be rejected. Test your provider's system against your actual customer demographic, not against their published benchmarks. Publish your bias testing results internally and include them in your model risk governance.

Liveness detection is not optional. Without it, your biometric verification is vulnerable to presentation attacks using printed photographs, video playback, or deepfakes. Require ISO 30107-3 certification from your provider and conduct your own testing with current-generation attack methods. The certification tests for specific attack types, and new attack methods emerge faster than the standard updates.

The customer experience is a design constraint, not an afterthought. Every additional verification step increases abandonment. An eKYC process that takes more than five minutes to complete loses a measurable percentage of applicants at each step. Design the verification flow to request only what is necessary for the customer's risk tier, and optimise the user interface for speed and clarity.
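The compounding effect of per-step abandonment is worth making concrete. Each additional step multiplies in another drop-off factor, so the numbers below (invented for illustration) show how quickly even small per-step losses add up.

```python
def completion_rate(step_dropoffs: list[float]) -> float:
    """Cumulative completion of a multi-step flow: each step
    multiplies in its own retention factor, so losses compound."""
    rate = 1.0
    for drop in step_dropoffs:
        rate *= 1.0 - drop
    return rate


# Five steps, each losing 5% of remaining applicants:
five_steps = completion_rate([0.05] * 5)   # 0.95**5, roughly 0.774
# Cut to three steps at the same per-step loss:
three_steps = completion_rate([0.05] * 3)  # 0.95**3, roughly 0.857
```

A five-step flow losing five per cent at each step completes only about 77 per cent of applicants; trimming two steps recovers roughly eight percentage points, which is why requesting only what the customer's risk tier requires pays off directly in conversion.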

Start with your highest-fraud-loss channel. If most identity fraud occurs during digital onboarding, invest there first. If it occurs during account recovery or authentication, prioritise that flow. Measure the fraud rate before and after deployment to quantify the impact. Identity verification improvements compound across the customer lifecycle: a stronger identity check at onboarding reduces fraud across all subsequent interactions.
