Contact Centre AI

Last reviewed April 2026

A large UK bank fields over 100 million customer calls a year. Most callers navigate three or four menu options before reaching a human who asks them to repeat everything they just keyed in. Contact centre AI promises to collapse that loop, but the gap between a chatbot that deflects calls and a system that genuinely resolves them is wider than most vendors acknowledge. Integration with workflow automation and core banking systems is where the real complexity lives.

What is contact centre AI?

Contact centre AI is the use of artificial intelligence across inbound and outbound customer communication channels: voice, chat, email, and messaging. It covers three distinct capabilities. First, virtual agents that handle customer queries without human involvement. Second, agent-assist tools that listen to live calls and surface relevant information for the human handler. Third, analytics layers that extract insight from every interaction to improve service quality and identify systemic issues.

The distinction matters because financial services contact centres have different constraints from retail or telecoms. Identity verification must happen before any account-level conversation. Vulnerable customer identification is a regulatory obligation under the FCA's Consumer Duty. And many queries involve regulated advice boundaries that a virtual agent cannot cross without compliance controls. A system that works brilliantly for parcel tracking will fail in a regulated environment without significant adaptation.

The cost arithmetic is compelling. A human-handled call costs a UK bank between 4 and 8 pounds. An AI-resolved interaction costs under 50 pence. For an institution handling millions of calls annually, even modest deflection rates translate to tens of millions in savings. But deflection is the wrong metric. Resolution rate, the proportion of AI-handled interactions that the customer does not need to follow up on, is what separates cost reduction from cost displacement.
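
The arithmetic is easy to sanity-check. A minimal sketch, using illustrative volumes and costs rather than figures from any particular institution, shows how a headline deflection rate overstates savings once callbacks are counted back in:

```python
# Illustrative cost model: all figures are assumptions, not benchmarks.
CALLS_PER_YEAR = 10_000_000      # annual inbound volume (assumed)
HUMAN_COST = 6.00                # £ per human-handled call (midpoint of £4-£8)
AI_COST = 0.50                   # £ per AI-handled interaction

def annual_saving(deflection_rate: float, callback_rate: float) -> float:
    """Net saving once callbacks (failed AI resolutions) are costed back in."""
    ai_handled = CALLS_PER_YEAR * deflection_rate
    callbacks = ai_handled * callback_rate           # customers who ring back anyway
    gross_saving = ai_handled * (HUMAN_COST - AI_COST)
    return gross_saving - callbacks * HUMAN_COST     # each callback lands on a human

# The same 30% deflection rate looks very different at 10% vs 40% callback rates.
for cb in (0.10, 0.40):
    print(f"callback rate {cb:.0%}: £{annual_saving(0.30, cb):,.0f}")
```

At a 40 per cent callback rate, most of the gross saving disappears, which is exactly the cost displacement the resolution-rate metric is meant to expose.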

The landscape

The FCA's Consumer Duty, effective from July 2023, reshaped the requirements for customer-facing AI in financial services. Firms must demonstrate that their communication channels deliver good outcomes, including for vulnerable customers. This means contact centre AI must detect vulnerability signals (speech patterns, distress indicators, cognitive difficulty) and escalate to a human handler when appropriate. A system optimised purely for containment rate will fail this test.
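
The Consumer Duty does not prescribe how vulnerability detection should work, so the following is only a minimal rule-based sketch; the signal names, thresholds, and escalation rule are assumptions rather than a recommended design:

```python
# Minimal sketch of an escalation gate for vulnerability signals.
# Signal names and thresholds are hypothetical; the obligation is to act on
# indicators of vulnerability, not to use any particular detection method.
from dataclasses import dataclass

@dataclass
class TurnSignals:
    distress_score: float          # e.g. from an acoustic/sentiment model, 0 to 1
    repeated_clarifications: int   # caller asked the bot to re-explain itself
    mentions_hardship: bool        # intent match on financial difficulty

def should_escalate(signals: TurnSignals) -> bool:
    """Hand the conversation to a human as soon as any strong signal fires."""
    return (
        signals.distress_score > 0.7
        or signals.repeated_clarifications >= 3
        or signals.mentions_hardship
    )

if should_escalate(TurnSignals(0.2, 1, True)):
    print("route to a trained human handler and flag the conversation for review")
```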

Large language models have raised customer expectations for conversational quality. Customers who interact with ChatGPT expect a bank's virtual assistant to understand natural language, remember context within a conversation, and handle multi-turn queries. The legacy IVR systems and keyword-matching chatbots that most banks deployed between 2018 and 2022 feel dated by comparison. Firms that have adopted the EU AI Act's transparency requirements early are finding that disclosure of AI interaction does not reduce customer engagement, provided the experience is good.

Omnichannel orchestration is where most implementations stall. A customer who starts a query on the app, continues via chat, and finishes on a phone call expects the context to follow them. Most banks have separate technology stacks for each channel, with separate vendors, separate data models, and separate conversation histories. AI that works within a single channel but loses context across channels creates a worse experience than no AI at all.

How AI changes this

Real-time speech analytics is production-ready and widely deployed in UK banking. These systems transcribe calls as they happen, identify the caller's intent, and surface relevant knowledge base articles, policy details, or account information on the agent's screen. The handler spends less time searching and more time solving. Average handle time reductions of 15 to 25 per cent are typical.
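
A minimal sketch of that loop is below, with the speech-to-text and intent models stubbed out; the function names, the confidence threshold, and the knowledge base structure are placeholders, not any vendor's API:

```python
# Sketch of the agent-assist loop: transcribe, detect intent, surface articles.
from dataclasses import dataclass

@dataclass
class Intent:
    name: str
    confidence: float

def transcribe(chunk: bytes) -> str:
    return "i want to dispute a card payment"        # stub for streaming speech-to-text

def detect_intent(text: str) -> Intent:
    return Intent("card_dispute", 0.92)               # stub for the NLU model

KNOWLEDGE_BASE = {"card_dispute": ["Chargeback policy", "Section 75 guidance"]}

def assist_live_call(audio_chunks):
    transcript = []
    for chunk in audio_chunks:
        transcript.append(transcribe(chunk))           # running transcript
        intent = detect_intent(" ".join(transcript))   # re-estimate intent each turn
        if intent.confidence > 0.8:                    # only surface confident matches
            yield KNOWLEDGE_BASE.get(intent.name, [])  # push articles to the agent UI

for articles in assist_live_call([b"\x00" * 1600]):
    print("surface to agent:", articles)
```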

Intent-based routing replaces rigid IVR menus with natural language understanding. The customer states their problem in their own words, and the system routes them to the right team or virtual agent based on the detected intent. This eliminates the "press 1 for..." loop and reduces misrouted calls, which are among the most expensive interactions because they require transfers and repeated explanations.
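
Once intent detection is in place, the routing decision itself can be very simple. A sketch, with hypothetical intent labels, queue names, and confidence threshold:

```python
# Sketch of intent-based routing replacing an IVR menu.
ROUTING_TABLE = {
    "lost_card": "cards_team",
    "mortgage_payment_holiday": "mortgage_specialists",
    "balance_enquiry": "virtual_agent",   # safe to keep fully automated
}

def route(intent: str, confidence: float) -> str:
    # Low-confidence intents go to a general human queue rather than guessing,
    # because misrouted calls are the most expensive kind.
    if confidence < 0.6:
        return "general_queue"
    return ROUTING_TABLE.get(intent, "general_queue")

print(route("lost_card", 0.91))   # -> cards_team
```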

Post-call analytics extracts structured data from every interaction: reason for contact, sentiment, resolution status, compliance flags. This feeds into complaint analytics and operational planning. Instead of relying on agents to manually categorise calls, the AI categorises every interaction consistently, revealing patterns that manual sampling misses.
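
A sketch of the structured record this produces, with illustrative field names, and one of the aggregate views it enables:

```python
# Sketch of a post-call analytics record; fields are illustrative, not a schema standard.
from dataclasses import dataclass
from collections import Counter

@dataclass
class CallRecord:
    contact_reason: str        # detected intent, not an agent-entered category
    sentiment: float           # -1 (negative) to 1 (positive)
    resolved: bool             # did the interaction end with resolution?
    compliance_flags: tuple    # e.g. possible complaint, vulnerability indicator

def top_contact_drivers(records: list[CallRecord], n: int = 3):
    """Consistent categorisation across every call, not a manual sample."""
    return Counter(r.contact_reason for r in records).most_common(n)

records = [
    CallRecord("card_dispute", -0.4, True, ()),
    CallRecord("card_dispute", -0.7, False, ("possible_complaint",)),
    CallRecord("balance_enquiry", 0.1, True, ()),
]
print(top_contact_drivers(records))
```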

Generative AI for response drafting is emerging but requires careful governance. LLMs can draft email and chat responses that the agent reviews before sending. The risk is hallucination: a model that invents a policy term or misquotes an interest rate creates regulatory exposure. Production deployments gate generative responses through retrieval-augmented generation, pulling answers exclusively from approved knowledge bases rather than the model's training data.
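
A minimal sketch of that gating pattern, with the retriever and the generation call stubbed out; the knowledge base entries and function names are invented for illustration:

```python
# Sketch of gating a generated draft behind retrieval from an approved knowledge base.
APPROVED_KB = {
    "chargeback_window": "You can dispute a card payment within 120 days.",
}

def retrieve(query: str) -> list[str]:
    return [v for k, v in APPROVED_KB.items() if k in query]   # stub retriever

def generate_draft(query: str, passages: list[str]) -> str:
    # Stand-in for an LLM call constrained to the retrieved passages.
    return f"Based on our policy: {passages[0]}"

def draft_reply(query: str) -> str | None:
    passages = retrieve(query)
    if not passages:
        return None       # no approved source: no draft, escalate to a human instead
    return generate_draft(query, passages)   # always reviewed by the agent, never auto-sent

print(draft_reply("what is the chargeback_window"))
```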

What to know before you start

Your knowledge base is the foundation, not the AI model. A virtual agent that retrieves answers from an outdated, inconsistent, or incomplete knowledge base will give outdated, inconsistent, or incomplete answers. Audit and restructure your knowledge content before deploying any AI layer on top of it. This is the work that takes months, and it is invisible to the board sponsor who wants a chatbot live by Q3.

Identity verification in the AI channel is a solved problem but a compliance-sensitive one. Voice biometrics, device recognition, and stepped authentication all work. The question is whether your compliance team has approved the specific approach for the specific channel. Get sign-off on the authentication model before building the conversational flow.
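
A sketch of what a stepped model might look like; the factor names and access tiers are assumptions, and the real policy is whatever the compliance sign-off specifies:

```python
# Sketch of stepped authentication in an AI channel. Tiers and factors are illustrative.
def authenticate(voice_match: bool, known_device: bool, otp_verified: bool) -> str:
    if voice_match and known_device:
        return "account_level"      # strong passive signals: full account servicing
    if otp_verified:
        return "account_level"      # stepped up via a one-time passcode
    if known_device:
        return "limited"            # generic servicing only, no account detail
    return "unauthenticated"        # keep the conversation to general information

print(authenticate(voice_match=False, known_device=True, otp_verified=False))
```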

Measure resolution rate, not deflection rate. A high deflection rate with a high callback rate means you have shifted cost from one channel to another while annoying the customer. Track whether the customer's issue was actually resolved in the AI interaction, and feed failures back into the system to improve it. The best contact centre AI deployments run continuous feedback loops where failed interactions train the next version of the model.
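
One way to measure it is to count an AI-handled interaction as resolved only if the same customer does not recontact within a follow-up window. A minimal sketch, where the seven-day window and the data shapes are assumptions:

```python
# Sketch of resolution rate: no recontact inside the window counts as resolved.
from datetime import datetime, timedelta

def resolution_rate(ai_interactions, follow_up_contacts, window_days=7):
    """Share of AI-handled interactions with no follow-up contact in the window."""
    window = timedelta(days=window_days)
    resolved = 0
    for customer_id, ended_at in ai_interactions:
        recontacted = any(
            cid == customer_id and timedelta(0) <= ts - ended_at <= window
            for cid, ts in follow_up_contacts
        )
        resolved += not recontacted
    return resolved / len(ai_interactions)

# c2 rang back the next day, so only half of the AI interactions count as resolved.
ai = [("c1", datetime(2026, 4, 1, 10, 0)), ("c2", datetime(2026, 4, 1, 11, 0))]
follow_ups = [("c2", datetime(2026, 4, 2, 9, 0))]
print(f"resolution rate: {resolution_rate(ai, follow_ups):.0%}")
```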

Start with agent-assist, not full automation. Putting AI alongside the human agent delivers immediate value (faster handle times, better consistency) with lower risk than replacing the human entirely. It also generates the training data you need to build accurate virtual agents later. Build Consumer Duty analytics into the platform from the outset so you can demonstrate that the AI channel delivers outcomes at least as good as the human channel. The institutions that jumped straight to full automation without this data foundation are the ones now unwinding their deployments.
