Generative AI

Last reviewed April 2026

Every insurer, bank, and asset manager is being asked the same question by their board: what is our generative AI strategy? The technology can draft regulatory responses, summarise customer files, and generate synthetic test data. But between the board presentation and a production deployment sit questions about accuracy, governance, and cost that most organisations have not yet answered.

What is generative AI?

Generative AI refers to systems that create new content (text, images, code, data) rather than classifying or predicting from existing data. The distinction matters because it changes the risk profile. A traditional ML model that scores a transaction as suspicious produces a number. A generative model that drafts a suspicious activity report produces prose that a human might send to the regulator. The output looks authoritative whether or not it is accurate.

Large language models are the most prominent form of generative AI in financial services, but the category is broader. It includes image generation (useful for synthetic document creation in testing), code generation (accelerating development teams), and structured data synthesis (generating realistic but fictitious customer records for model training without privacy risk).

The generative label distinguishes these systems from the predictive analytics and classification models that financial services has used for decades. Credit scoring models predict. Fraud detection models classify. Generative models produce. This distinction shapes how you validate, govern, and deploy them.

The landscape

The EU AI Act treats generative AI through its general-purpose AI model provisions. Providers must publish training data summaries, comply with copyright law, and label AI-generated content. For financial institutions using third-party generative models, the deployer obligations are more relevant: if you use generative AI in a high-risk context (credit decisions, insurance pricing), you must ensure human oversight, maintain logs, and provide explanations to affected individuals.

The UK's approach puts the onus on the firm. The FCA's 2024 feedback on AI in financial services made clear that existing rules on governance, accountability, and consumer outcomes apply to generative AI without new legislation. The Senior Managers and Certification Regime (SM&CR) means someone in the firm is personally accountable for the outputs of a generative system that affects customers.

Adoption is moving faster internally than externally. The Bank of England and FCA's 2024 survey of AI in UK financial services found that 75 per cent of firms were already using AI, with generative AI deployed predominantly for internal tasks: drafting, summarisation, and code assistance. Customer-facing deployments remain a minority, with firms citing accuracy risk and regulatory uncertainty as the primary barriers.

How AI changes this

The clearest wins are in knowledge work that currently absorbs skilled human time: drafting first versions of compliance reports, summarising lengthy board packs, generating test cases for software development, and creating first drafts of client communications. In each case the pattern is the same: the model produces a draft, a human refines it. Reported productivity gains of 30 to 40 per cent are common for these tasks, though results vary with task complexity and review overhead.

Synthetic data generation addresses a real constraint. Training a fraud detection model requires examples of fraud, but fraud is rare by definition. Generative models can create realistic synthetic fraud patterns that augment the training set without compromising real customer data. The same principle applies to stress testing scenarios for risk models.
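
A minimal sketch of the augmentation step, in which simple random draws stand in for a trained generative model; the field names, distributions, and parameters are illustrative assumptions rather than a real fraud schema, and a production pipeline would validate the synthetic records statistically before training on them.

```python
# Illustrative only: random draws stand in for a trained generative model,
# and the schema is invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

def synthetic_fraud(n: int) -> pd.DataFrame:
    """Generate n fictitious fraud-like transaction records."""
    return pd.DataFrame({
        # Heavy-tailed amounts; parameters are assumptions, not fitted values.
        "amount_gbp": np.round(rng.lognormal(mean=6.0, sigma=1.2, size=n), 2),
        # Toy pattern: fraud skewed towards night-time hours.
        "hour_of_day": rng.choice([0, 1, 2, 3, 4, 23], size=n),
        "new_payee": rng.choice([True, False], size=n, p=[0.8, 0.2]),
        "label": np.ones(n, dtype=int),  # 1 = fraud, for the rare class
    })

# Append synthetic positives to the genuine training set to rebalance
# the rare fraud class without touching real customer data.
synthetic = synthetic_fraud(500)
print(synthetic.head())
```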

The automation of routine correspondence is production-ready for tightly scoped use cases: an insurer generating standard policy renewal letters, a bank producing account closure confirmations, or an asset manager creating periodic fund commentaries. The key word is "standard." Generative AI is reliable when the output space is constrained. It becomes unpredictable when the output space is open.
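
One way to make a constrained output space concrete is to let the system fill named slots in an approved template and reject anything else. A hedged sketch, with hypothetical template wording and field names:

```python
from string import Template

# Approved template: the generator may only supply values for these slots,
# never free-form prose. Wording and field names are illustrative.
RENEWAL_TEMPLATE = Template(
    "Dear $customer_name,\n\n"
    "Your policy $policy_id renews on $renewal_date at an annual premium "
    "of £$premium.\n\nYours sincerely,\nCustomer Services"
)

ALLOWED_FIELDS = {"customer_name", "policy_id", "renewal_date", "premium"}

def render_renewal(fields: dict) -> str:
    # Reject missing or unexpected fields so the output space stays fixed;
    # substitute() itself raises KeyError if a slot is left unfilled.
    if set(fields) != ALLOWED_FIELDS:
        raise ValueError(f"unexpected fields: {set(fields) ^ ALLOWED_FIELDS}")
    return RENEWAL_TEMPLATE.substitute(fields)

print(render_renewal({
    "customer_name": "A. Example",
    "policy_id": "POL-000123",
    "renewal_date": "1 June 2026",
    "premium": "412.50",
}))
```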

What to know before you start

Define the boundary between generation and decision. Generative AI can draft a claims denial letter. It should not decide to deny the claim. The model produces content; a human (or a validated deterministic system) makes the decision. Conflating generation with decision-making is the fastest route to regulatory trouble.
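
A sketch of what that boundary can look like in code, with hypothetical function names: the decision comes from validated deterministic logic, and the generative step only drafts prose about a decision already made.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    policy_active: bool
    within_cover: bool

def decide_claim(claim: Claim) -> str:
    """Validated deterministic decision logic; no generative model here."""
    return "approved" if claim.policy_active and claim.within_cover else "denied"

def draft_letter(claim: Claim, decision: str) -> str:
    """Stand-in for a generative model call: it receives the decision as
    input and drafts prose about it, but cannot change it."""
    return (f"[DRAFT - requires human review] Claim {claim.claim_id} "
            f"has been {decision}.")

claim = Claim("CLM-42", policy_active=True, within_cover=False)
decision = decide_claim(claim)          # decision made first, deterministically
letter = draft_letter(claim, decision)  # generation only produces the draft
print(letter)
```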

Cost is not trivial at scale. A single LLM query costs fractions of a penny, but processing 100,000 documents per day adds up. Build a cost model before committing to a production deployment. Compare the per-unit cost of the AI pipeline against the fully loaded cost of the human process it replaces. Include the cost of human review, because for most financial services applications, you will still need it.
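
A back-of-envelope version of that cost model; every figure below is an assumption to replace with your own numbers.

```python
# Every figure here is an assumption to replace with your own numbers.
DOCS_PER_DAY = 100_000
TOKENS_PER_DOC = 3_000          # prompt + completion, assumed
PRICE_PER_1K_TOKENS = 0.002     # GBP per 1,000 tokens, assumed blended rate
REVIEW_MINUTES_PER_DOC = 0.5    # human spot-check of AI output, assumed
MANUAL_MINUTES_PER_DOC = 12.0   # current all-human process, assumed
HOURLY_COST = 45.0              # fully loaded cost per person-hour, assumed

model_cost = DOCS_PER_DAY * TOKENS_PER_DOC / 1_000 * PRICE_PER_1K_TOKENS
review_cost = DOCS_PER_DAY * REVIEW_MINUTES_PER_DOC / 60 * HOURLY_COST
manual_cost = DOCS_PER_DAY * MANUAL_MINUTES_PER_DOC / 60 * HOURLY_COST

print(f"AI pipeline:   £{model_cost + review_cost:,.0f}/day "
      f"(model £{model_cost:,.0f} + review £{review_cost:,.0f})")
print(f"Human process: £{manual_cost:,.0f}/day")
```

With these assumed figures the human review step, not the model, dominates the AI pipeline's cost, which is why the review requirement belongs in the cost model from the start.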

Intellectual property questions are unresolved. If a generative model drafts a client report, who owns the output? If it reproduces language from a copyrighted source, who is liable? Your legal team needs to review the terms of service of any generative AI provider and assess IP risk before the output reaches a client. The data governance framework should cover model outputs as well as inputs.

Start with one high-volume, low-risk internal process: summarising call transcripts, drafting internal status reports, or generating code documentation. Measure the time saved, the error rate, and adoption. Use that data to build the business case for broader deployment. The organisations that succeed with generative AI are those that start small, measure honestly, and expand based on evidence.
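
A minimal sketch of tracking those three metrics across a pilot; the fields and sample rows are invented for illustration.

```python
from statistics import mean

# One row per completed task:
# (minutes for the manual baseline, minutes with AI assistance,
#  output contained an error, user kept using the tool afterwards)
pilot_log = [
    (30, 18, False, True),
    (25, 20, True,  True),
    (40, 22, False, False),
]

time_saved = mean(1 - ai / manual for manual, ai, _, _ in pilot_log)
error_rate = mean(err for _, _, err, _ in pilot_log)
adoption = mean(kept for _, _, _, kept in pilot_log)

print(f"time saved {time_saved:.0%}, error rate {error_rate:.0%}, "
      f"adoption {adoption:.0%}")
```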
