Artificial Intelligence (AI)

Last reviewed April 2026

The financial services industry spends billions on artificial intelligence (AI) each year, yet most institutions struggle to name five production systems that use it. Why does the gap between AI investment and AI deployment remain so wide, and what separates the firms that ship from those that pilot indefinitely?

What is AI?

Artificial intelligence is the broad field of building systems that perform tasks normally requiring human judgement. In financial services, this covers everything from rules-based automation to machine learning models that learn from data, to large language models that process and generate text. The term is used so broadly that it has become nearly meaningless in vendor conversations. A spreadsheet macro and a fraud detection neural network are both sold as "AI" depending on the sales deck.

What matters in practice is the distinction between narrow AI and general AI. Every AI system in production in financial services today is narrow: it does one thing well. A credit scoring model scores credit. A document extraction system reads invoices. A chatbot answers customer queries within a defined scope. General AI, a system that reasons across domains the way a human does, does not exist in any production financial services environment. Vendors who imply otherwise are selling futures.

The operational reality is that AI in financial services is a collection of specific techniques applied to specific problems. Natural language processing reads documents. Computer vision assesses damage photographs. Predictive models forecast defaults. Each technique has its own data requirements, validation process, and failure modes. Treating AI as a single capability rather than a toolkit of distinct methods is the first mistake most institutions make.

The landscape

The EU AI Act is the most significant regulatory development in AI governance globally. It classifies AI systems by risk tier, with credit scoring, fraud detection, and biometric identification categorised as high risk. High-risk systems must meet requirements for transparency, data quality, human oversight, and documentation. The first obligations took effect in 2025, with full application from August 2026.
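
To make the classification concrete, the sketch below maps the use cases named above to risk tiers and lists the controls a high-risk system must evidence. It is purely illustrative: the tier assignments and control names are taken from this article's summary rather than from legal analysis of the Act, and the chatbot tier is an assumption.

```python
# Illustrative sketch only: a minimal risk-tier register for AI systems,
# using the categories named in this article. Tier assignments for real
# systems must come from legal analysis of the Act itself.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Use cases this article identifies as high risk under the EU AI Act.
USE_CASE_TIERS = {
    "credit_scoring": RiskTier.HIGH,
    "fraud_detection": RiskTier.HIGH,
    "biometric_identification": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,  # assumption for illustration
}

# Controls the Act requires for high-risk systems, per the paragraph above.
HIGH_RISK_CONTROLS = ("transparency", "data_quality", "human_oversight", "documentation")

def required_controls(use_case: str) -> tuple[str, ...]:
    """Return the controls a use case must evidence before deployment."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return HIGH_RISK_CONTROLS if tier is RiskTier.HIGH else ()

print(required_controls("credit_scoring"))
# ('transparency', 'data_quality', 'human_oversight', 'documentation')
```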

The UK has taken a different path. Rather than sector-wide AI legislation, the UK government has tasked existing regulators (the FCA, PRA, ICO) with applying AI principles within their existing mandates. The result is a patchwork that offers flexibility but less certainty. Financial institutions operating across both jurisdictions must satisfy both frameworks, which means building to the higher standard in practice.

Foundation models and large language models have shifted the conversation from "should we use AI" to "how do we govern AI that our staff are already using." Shadow AI, employees using ChatGPT or similar tools with client data, is a live risk in most institutions. The governance challenge has outpaced the deployment challenge. Firms that lack an AI audit trail and clear usage policies are accumulating regulatory risk whether they have an official AI programme or not.
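
An audit trail does not need to be elaborate to be useful. The sketch below shows a minimal append-only usage log; the field names and file-based storage are illustrative assumptions, not a standard schema. The point is that every AI interaction leaves a reviewable trace.

```python
# Minimal sketch of an AI usage audit record. Field names are
# illustrative assumptions, not a standard schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    user_id: str                # who used the tool
    tool: str                   # which AI system or external service
    purpose: str                # stated business purpose
    contains_client_data: bool  # flag for policy review
    timestamp: str

def log_ai_usage(user_id: str, tool: str, purpose: str,
                 contains_client_data: bool) -> None:
    record = AIUsageRecord(user_id, tool, purpose, contains_client_data,
                           datetime.now(timezone.utc).isoformat())
    # Append-only log; in production this would go to tamper-evident storage.
    with open("ai_usage_audit.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_ai_usage("u123", "external_llm", "summarise policy document",
             contains_client_data=False)
```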

How AI changes this

The most mature AI applications in financial services are in fraud detection, credit decisioning, and document processing. These have been in production for years, with well-understood performance characteristics and regulatory expectations. They are not experimental. They are infrastructure.

Large language models are creating a second wave of applications in areas that were previously resistant to automation: summarising regulatory documents, drafting suspicious activity report (SAR) narratives, extracting data from unstructured broker submissions, and answering customer queries. The difference from the first wave is speed of deployment. A fraud detection model takes months to build and validate; an LLM-based document summariser can reach pilot in weeks. That speed creates governance risk: deployment outpaces validation.

The shift from point solutions to platform thinking is where the most advanced institutions are heading. Rather than building separate AI systems for each use case, they invest in shared infrastructure: common data pipelines, model registries, monitoring frameworks, and governance processes. This reduces the marginal cost of each new AI application and ensures consistent standards across the portfolio. MLOps is the discipline that makes this operational.
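
What shared infrastructure means in practice is one record format for every model in the portfolio, so governance checks are uniform rather than bespoke. The sketch below is illustrative; the field names are assumptions, not a reference schema.

```python
# Illustrative sketch of a model registry entry: a single record format
# reused across the portfolio so governance checks are uniform.
# Requires Python 3.10+ for the "str | None" union syntax.
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    name: str
    version: str
    owner: str                  # accountable business owner
    use_case: str
    risk_tier: str              # e.g. "high" under the EU AI Act
    validated: bool = False     # independent validation complete?
    approved_by: str | None = None
    monitoring_dashboard: str | None = None  # link to live monitoring

registry: dict[str, RegistryEntry] = {}

def register(entry: RegistryEntry) -> None:
    """File a model under name:version so nothing ships unregistered."""
    registry[f"{entry.name}:{entry.version}"] = entry

register(RegistryEntry(name="default_forecast", version="2.1",
                       owner="credit_risk", use_case="credit_scoring",
                       risk_tier="high"))
```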

What to know before you start

Start with the problem, not the technology. The question is not "where can we use AI?" but "which operational problem costs us the most, and would data-driven automation reduce that cost?" The firms that succeed with AI are those that select use cases based on business value and data readiness, not on technological ambition.

Data readiness is the single biggest predictor of AI programme success. If your customer data is fragmented across systems, your transaction data is inconsistent, and your documents are not digitised, no model will compensate. Assess data quality for your target use case before writing a line of code.
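
A first pass at that assessment can be scripted. The sketch below, assuming tabular data in a pandas DataFrame, reports missing-value rates, duplicate rows, and column types for the fields a use case depends on. What counts as "ready" is a judgement against the use case, not a fixed threshold.

```python
# First-pass data readiness check: a minimal sketch assuming tabular
# data in a pandas DataFrame. Interpret the output against your own
# use case; there are no universal thresholds.
import pandas as pd

def readiness_report(df: pd.DataFrame, key_columns: list[str]) -> dict:
    return {
        # Share of missing values per key column.
        "missing_rates": df[key_columns].isna().mean().to_dict(),
        # Duplicate rows suggest fragmented or repeated source records.
        "duplicate_rows": int(df.duplicated().sum()),
        # Unexpected dtypes often signal inconsistent source systems.
        "dtypes": df[key_columns].dtypes.astype(str).to_dict(),
        "row_count": len(df),
    }

df = pd.DataFrame({"customer_id": [1, 2, 2, None],
                   "balance": [100.0, 250.5, 250.5, 90.0]})
print(readiness_report(df, ["customer_id", "balance"]))
```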

Governance before deployment, not after. Define who approves AI models, how they are validated, how they are monitored, and how decisions are explained. The EU AI Act requires this for high-risk systems. The FCA expects it as good practice for all AI systems. Building governance retroactively is more expensive and less effective than building it first.
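
Governance-first can be enforced mechanically: a deployment gate that refuses any model whose governance record does not answer the four questions above. The sketch below is illustrative; the field names are assumptions.

```python
# Minimal sketch of a pre-deployment gate: deployment is refused unless
# the governance record is complete. The required items follow the four
# questions in the paragraph above; names are illustrative.
REQUIRED_GOVERNANCE = ("approver", "validation_report",
                       "monitoring_plan", "explanation_method")

def can_deploy(governance_record: dict) -> tuple[bool, list[str]]:
    """Return (allowed, missing_items) for a model's governance record."""
    missing = [item for item in REQUIRED_GOVERNANCE
               if not governance_record.get(item)]
    return (not missing, missing)

record = {"approver": "model_risk_committee",
          "validation_report": "VAL-2026-014",
          "monitoring_plan": None,            # not yet defined
          "explanation_method": "reason codes"}

allowed, missing = can_deploy(record)
print(allowed, missing)  # False ['monitoring_plan']
```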

Budget for operations, not just development. Building a model is 20 per cent of the total cost. Running it in production, monitoring its performance, retraining it when the data shifts, and maintaining the governance documentation is the other 80 per cent. Most failed AI programmes were funded as projects when they needed to be funded as capabilities.
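
Taken at face value, the split implies a simple rule of thumb, sketched below: divide the build cost by 0.2 to estimate total cost of ownership. The figures are this article's estimate, not a benchmark.

```python
# Worked example of the 20/80 split described above. BUILD_SHARE is the
# article's estimate, not a measured constant.
BUILD_SHARE = 0.20

def lifetime_cost(build_cost: float) -> float:
    """Total cost of ownership implied by the build-cost share."""
    return build_cost / BUILD_SHARE

print(f"£{lifetime_cost(200_000):,.0f}")  # a £200,000 build implies £1,000,000
```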

Exploring AI for your organisation? Book fifteen minutes on the calendar.

Let’s build AI together