Banking
Enterprise AI for Banking
KYC backlogs, AML false positives, slow credit decisions, and manual regulatory returns. These are the four problems where AI is already earning its place inside UK banks. Not in theory. In production, under regulatory scrutiny, with real customers.
Onboarding
KYC and customer onboarding
A typical UK bank reviews between 5,000 and 50,000 KYC cases per month. Each case requires identity verification, sanctions screening, adverse media checks, and a risk assessment. Most of that work is manual. An analyst opens the case, checks the documents, queries three or four databases, writes a summary, and assigns a risk rating. Forty minutes per case. Eighty per cent of those cases are straightforward.
AI changes the economics of that queue. Document extraction pulls structured data from passports, utility bills, and corporate filings. Entity resolution matches customer records across internal systems and external databases. A scoring model triages each case by risk: low, medium, high. The low-risk cases (typically 60-70 per cent of the queue) pass through with minimal human review. Analysts focus on the cases that actually need judgement.
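A minimal sketch of that triage step, assuming the scoring model emits a probability-style risk score. The thresholds and queue names here are hypothetical; real cut-offs come from the bank's risk appetite and are validated against historical outcomes.

```python
from dataclasses import dataclass

# Hypothetical cut-offs -- in practice these are set and back-tested
# against the bank's documented risk appetite.
LOW_CUTOFF = 0.2
HIGH_CUTOFF = 0.7

@dataclass
class KycCase:
    case_id: str
    risk_score: float  # scoring model output, 0.0 (low risk) to 1.0 (high risk)

def triage(case: KycCase) -> str:
    """Route a KYC case to a queue based on its model score."""
    if case.risk_score < LOW_CUTOFF:
        return "auto_clear"              # minimal human review
    if case.risk_score < HIGH_CUTOFF:
        return "standard_review"         # normal analyst queue
    return "enhanced_due_diligence"      # senior analyst, full workup
```

The point of the sketch is the shape, not the numbers: the model scores, the rules route, and every routing decision is reproducible from the score and the documented thresholds.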
The regulatory constraint is clear. The FCA requires firms to apply a risk-based approach to customer due diligence under the Money Laundering Regulations 2017. AI does not remove the obligation. It makes the risk-based approach more consistent. A model applies the same criteria to every case. A team of forty analysts, working under pressure, does not.
The hard part is not the model. It is the data. KYC data lives in core banking systems, CRM platforms, document stores, and third-party screening providers. Building a pipeline that pulls from all of these with consistent latency and full audit trail is the real engineering challenge. Start there. The model is the easy part.
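One way to picture that pipeline: every field in the case file carries provenance, so a reviewer can trace any value back to the system it came from and when it was retrieved. The interface below is hypothetical; real connectors wrap core banking, CRM, and screening-provider APIs.

```python
from datetime import datetime, timezone

def fetch_with_audit(case_id, sources):
    """Collect KYC data from multiple sources and record provenance.

    sources: mapping of source name -> callable returning a dict of
    fields for the case. Hypothetical interface for illustration.
    """
    record, audit = {}, []
    for name, fetch in sources.items():
        fields = fetch(case_id)
        retrieved_at = datetime.now(timezone.utc).isoformat()
        for field, value in fields.items():
            record[field] = value
            # One audit row per field: which source, retrieved when.
            audit.append({"field": field, "source": name,
                          "retrieved_at": retrieved_at})
    return record, audit
```

The audit trail is not an afterthought here: it is produced in the same pass as the data, which is what makes it complete by construction.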
Financial crime
AML and financial crime
Anti-money laundering operations in UK banks are drowning in false positives. Transaction monitoring systems generate alerts. Analysts investigate them. Over 95 per cent turn out to be nothing. That is not a technology failure. It is a design problem. Rule-based systems fire on patterns, not on intent. They cannot distinguish a cash-intensive restaurant from a shell company.
AI-based transaction monitoring changes the signal-to-noise ratio. Machine learning models trained on historical SARs (Suspicious Activity Reports) and confirmed fraud cases learn to weight alerts by genuine risk. Banks deploying these models report false positive reductions of 60 to 80 per cent. That is not just an efficiency gain. It is a fundamental change in how financial crime teams spend their time.
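The core idea can be shown with a deliberately simplified sketch: weight each alert by how often its triggering rule has historically led to a SAR, and deprioritise rules with near-zero precision. Production systems use richer feature sets and full ML models; the per-rule precision here stands in for the learned weighting.

```python
from collections import Counter

def fit_alert_weights(history):
    """history: (rule_id, sar_filed) pairs from past investigations.

    Returns each rule's precision: the fraction of its alerts that
    led to a SAR. A crude stand-in for a trained risk model.
    """
    fired, confirmed = Counter(), Counter()
    for rule_id, sar_filed in history:
        fired[rule_id] += 1
        if sar_filed:
            confirmed[rule_id] += 1
    return {rule: confirmed[rule] / fired[rule] for rule in fired}

def prioritise(alerts, weights, floor=0.05):
    """Keep alerts (here represented by their triggering rule) whose
    rule has historically produced SARs above the floor."""
    return [a for a in alerts if weights.get(a, 0.0) >= floor]
```

Even this naive version makes the governance question concrete: the `floor` parameter decides which alerts never reach an analyst, so it is exactly the kind of threshold that needs documented validation.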
The PRA's SS1/23 applies directly here. Any model that influences whether an alert is investigated or dismissed is a material risk model. It needs full model risk management: validation, challenger models, ongoing monitoring for drift, and clear documentation of the model's boundaries. The regulator does not object to AI in AML. It objects to AI without governance.
Network analysis is the second frontier. Traditional AML looks at individual transactions. AI can map relationships between accounts, entities, and counterparties to identify coordinated activity. A single transaction looks clean. A network of 200 transactions across 15 accounts, all linked to the same beneficial owner, looks very different. This is work that humans cannot do at scale. It is where AI earns its place in financial crime detection.
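A toy version of that network check, assuming transactions carry a resolved beneficial owner. The thresholds are illustrative; real detection works on a full entity graph, but the grouping logic is the same in miniature.

```python
from collections import defaultdict

def find_networks(transactions, min_accounts=5, min_txns=50):
    """transactions: (account_id, beneficial_owner, amount) tuples.

    Flags beneficial owners whose linked accounts and transaction
    volume both exceed the thresholds -- the pattern a single-
    transaction view cannot see.
    """
    accounts = defaultdict(set)   # owner -> distinct linked accounts
    counts = defaultdict(int)     # owner -> transaction count
    for account, owner, _amount in transactions:
        accounts[owner].add(account)
        counts[owner] += 1
    return {owner for owner in accounts
            if len(accounts[owner]) >= min_accounts
            and counts[owner] >= min_txns}
```

Each transaction in isolation passes every rule; only the aggregation over the owner reveals the structure.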
Lending
Credit decisioning
Credit scoring in UK banking still relies heavily on bureau data and static scorecards. These work well for prime borrowers with long credit histories. They work badly for thin-file customers: recent graduates, migrants, gig economy workers, and small businesses with limited trading history. This is not just a fairness problem. It is a commercial one. Banks are declining profitable lending because the scorecard cannot see the signal.
AI-based credit models can ingest alternative data sources: Open Banking transaction data (under PSD2), rental payment histories, income verification through APIs, and behavioural patterns from account usage. These signals, combined with traditional bureau data, produce a more complete picture of creditworthiness. Banks using these models report approval rate increases of 15 to 25 per cent in thin-file segments with no increase in default rates.
Explainability is the regulatory constraint. The FCA's Consumer Duty requires firms to demonstrate that lending decisions are fair and that customers can understand why they were declined. A gradient-boosted tree that outputs a score is not enough. You need feature importance, adverse action reasons, and a clear mapping from model output to decision. Build explainability into the model architecture, not as a wrapper after the fact.
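One shape that mapping can take: translate the strongest negative feature contributions (from SHAP values or similar) into customer-facing reason codes. The feature names and wording below are hypothetical; real reason codes come from the bank's credit policy and legal review.

```python
# Hypothetical reason-code mapping -- real wording is owned by
# credit policy, not by the model team.
REASON_CODES = {
    "credit_utilisation": "Proportion of available credit in use is high",
    "months_on_book": "Limited history with this institution",
    "missed_payments": "Recent missed payments on file",
    "income_stability": "Income pattern could not be verified",
}

def adverse_action_reasons(contributions, top_n=2):
    """contributions: feature -> signed contribution to the score,
    where negative values push toward decline. Returns customer-
    facing reasons for the strongest negative drivers."""
    negatives = sorted(
        (f for f, c in contributions.items() if c < 0),
        key=lambda f: contributions[f],  # most negative first
    )
    return [REASON_CODES[f] for f in negatives[:top_n] if f in REASON_CODES]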
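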
Speed matters too. A consumer expecting an instant decision on a personal loan will not wait three days for manual underwriting. AI models that score in milliseconds and refer only the edge cases to human review let banks compete on speed without compromising on risk. The data infrastructure must support real-time feature serving. Batch scoring is not fast enough for modern origination flows.
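The referral pattern itself is simple. A sketch, with illustrative thresholds: auto-decide at the confident ends of the score range, and route only the ambiguous band to a human underwriter.

```python
def decide(score, approve_above=0.75, decline_below=0.40):
    """Instant decision with a referral band for edge cases.

    Thresholds are hypothetical -- in practice they are set so the
    referral band captures the cases where model confidence is low.
    """
    if score >= approve_above:
        return "approve"
    if score < decline_below:
        return "decline"
    return "refer_to_underwriter"
```

Everything hard lives upstream of this function: the score must be computed from fresh features in milliseconds, which is why real-time feature serving, not the decision rule, is the engineering problem.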
Compliance
Regulatory reporting
UK banks submit hundreds of regulatory returns each year to the PRA, FCA, and Bank of England. Capital adequacy, liquidity coverage, large exposures, remuneration, conduct data. Each return requires data extraction from multiple source systems, transformation into the regulator's format, validation, and sign-off. The cycle repeats quarterly, monthly, or daily depending on the return.
Most banks run this process on spreadsheets, legacy ETL pipelines, and manual reconciliation. The reporting team spends 70 per cent of its time on data extraction and transformation. The remaining 30 per cent goes to the work that actually requires expertise: interpreting the rules, making judgement calls on boundary cases, and explaining the numbers to senior management.
AI automates the 70 per cent. Natural language processing extracts data points from unstructured documents (board minutes, committee papers, policy documents). Machine learning maps source data to reporting taxonomies. Validation models flag anomalies and inconsistencies before the return reaches the reviewer. The reporting team shifts from data wranglers to report reviewers.
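The validation step can be as simple as comparing a return's line items against the prior period and flagging anything that moved beyond tolerance or appeared from nowhere. A sketch with a hypothetical check; real validation rules come from the regulator's taxonomy and the firm's own reconciliation controls.

```python
def flag_anomalies(current, previous, tolerance=0.25):
    """Compare a return's line items against the prior period.

    current, previous: mapping of line item -> reported value.
    Returns (item, reason) pairs for human review. Every exception
    goes to a reviewer; nothing is auto-corrected.
    """
    exceptions = []
    for item, value in current.items():
        prior = previous.get(item)
        if prior is None:
            exceptions.append((item, "new line item"))
        elif prior and abs(value - prior) / abs(prior) > tolerance:
            exceptions.append(
                (item, f"moved {value - prior:+,.0f} vs prior period"))
    return exceptions
```

Note the design choice: the function surfaces exceptions, it never fixes them. That is the conservative posture the use case demands.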
The risk here is accuracy. A regulatory return with errors is not just embarrassing. It can trigger a Section 166 review or a supervisory intervention. AI models used in regulatory reporting need conservative thresholds, human review on every exception, and full audit trails from source data to submitted figure. This is not a use case for autonomous AI. It is a use case for AI that does the mechanical work and presents the results for expert review. The enterprise AI guide covers the governance framework in detail.
We have built AI systems inside UK banks. If you are working on any of these problems, fifteen minutes on the calendar is the fastest way to find out whether we can help.
Let’s build AI together