Enterprise AI for Insurance

Where AI earns its place inside UK insurers and Lloyd's managing agents. Four operational areas where the technology is mature, the ROI is provable, and the regulatory path is clear under Solvency II and IFRS 17.

Claims processing and straight-through settlement

Most insurers still process every claim through the same queue. A windscreen chip sits behind a subsidence claim. A travel delay waits behind a total loss. The queue is first-in-first-out, and the adjusters who handle complex claims spend half their day on ones that could settle themselves.

Straight-through processing changes the economics. An AI system reads the first notice of loss, classifies the claim by type, complexity, and likely reserve, then routes it. Simple, well-documented claims (motor glass, flight delays, lost luggage below a threshold) settle automatically. Complex claims go straight to a senior adjuster with a pre-populated summary. The queue disappears. What remains is two streams: one automated, one human.

The claims triage model needs three things: structured data from the policy administration system, unstructured data from the claimant (photos, descriptions, receipts), and a rules engine that defines the straight-through boundary. The model handles classification. The rules engine handles authority. No model should approve a payment without a human-defined rule permitting it.
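The split between model and rules engine can be sketched as follows. This is a minimal illustration: the claim types, reserve limits, and document requirements are hypothetical, and in production the classification and reserve estimate would come from the triage model rather than being passed in directly.

```python
from dataclasses import dataclass

# Human-defined straight-through rules. Claim types, limits, and document
# requirements here are hypothetical examples, not a real authority matrix.
STP_RULES = {
    # claim_type: (max_reserve_gbp, required_documents)
    "motor_glass":  (1_500, {"photo", "invoice"}),
    "flight_delay": (500, {"booking", "airline_confirmation"}),
    "lost_luggage": (1_000, {"airline_report", "receipts"}),
}

@dataclass
class Claim:
    claim_type: str        # model output: classification
    likely_reserve: float  # model output: reserve estimate
    documents: set         # documents supplied by the claimant

def route(claim: Claim) -> str:
    """Settle automatically only if a human-defined rule permits it."""
    rule = STP_RULES.get(claim.claim_type)
    if rule is None:
        return "adjuster"  # no rule for this claim type: always a human
    max_reserve, required_docs = rule
    if claim.likely_reserve <= max_reserve and required_docs <= claim.documents:
        return "auto_settle"
    return "adjuster"
```

The design point is that `STP_RULES` is owned and edited by claims management, not the model team: widening the straight-through boundary is a human decision, never a model output.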

The regulatory position is straightforward. The FCA expects fair treatment of customers regardless of how the claim is processed. That means the straight-through path must produce outcomes at least as good as the manual path. Monitor settlement amounts, cycle times, and complaint rates across both streams. If the automated stream produces worse outcomes for any customer segment, the model has a bias problem. Fix it or pull it.
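One way to operationalise that monitoring is a per-segment parity check between the two streams. A minimal sketch, assuming outcome metrics where lower is better (complaint rate, reopen rate) keyed by customer segment; the segment names and figures are illustrative:

```python
# Per-segment outcome parity check across the automated and manual streams.
# Any "lower is better" metric (complaint rate, leakage, reopen rate) fits
# the same shape; segment names here are illustrative assumptions.
def parity_flags(auto_stream, manual_stream, margin=0.0):
    """Return segments where the automated stream underperforms the manual one."""
    return sorted(
        segment for segment in auto_stream
        if auto_stream[segment] > manual_stream[segment] + margin
    )
```

A flagged segment is the trigger the text describes: investigate the model for bias in that segment, and fix it or pull it.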

Typical results: 40-60% of motor and travel claims eligible for straight-through settlement. Cycle time for those claims drops from days to hours. Adjuster capacity freed for complex claims increases by 30-50%. The ROI is not theoretical. It is measurable within the first quarter.

Underwriting and submission triage

A commercial lines underwriter at a Lloyd's syndicate receives hundreds of submissions per week. Most arrive as broker emails with PDF attachments: a slip, a schedule, loss history, sometimes a risk survey. The underwriter reads every one, decides which to quote, and prices the ones worth pursuing. The reading takes longer than the pricing.

AI-assisted underwriting starts with submission triage, not pricing. A model reads the submission, extracts key fields (line of business, territory, limit, deductible, loss history), scores the submission against the syndicate's appetite, and presents a ranked queue. Submissions outside appetite are declined automatically with a standard response. Submissions inside appetite arrive on the underwriter's desk pre-parsed, with comparable risks from the portfolio already surfaced.
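A minimal sketch of the triage step, assuming extraction has already produced structured fields. The appetite definition, field names, and scoring weights are illustrative assumptions, not a real syndicate's appetite:

```python
# Hypothetical appetite definition for submission triage.
APPETITE = {
    "lines": {"property", "marine_cargo"},
    "territories": {"UK", "EU"},
    "max_limit_gbp": 25_000_000,
}

def triage(submission: dict) -> tuple:
    """Return (decision, score). Out-of-appetite submissions are auto-declined."""
    if (submission["line"] not in APPETITE["lines"]
            or submission["territory"] not in APPETITE["territories"]
            or submission["limit_gbp"] > APPETITE["max_limit_gbp"]):
        return "decline", 0.0
    # Rank in-appetite submissions: favour headroom under the limit cap and a
    # clean five-year loss history (weights are illustrative only).
    headroom = 1 - submission["limit_gbp"] / APPETITE["max_limit_gbp"]
    loss_penalty = min(submission["losses_5yr"], 5) / 5
    score = round(0.6 * headroom + 0.4 * (1 - loss_penalty), 3)
    return "review", score
```

Sorting the "review" queue by score gives the underwriter the ranked, pre-parsed desk described above.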

The value is in the reading, not the decision. Underwriters spend 50-70% of their time on data extraction and comparison. AI compresses that to seconds. The underwriter's judgement (the part that actually requires experience) gets applied to more submissions, faster, with better context. Hit rates improve because the underwriter is no longer fatigued by the time the good risks arrive.

Risk assessment models trained on portfolio data can surface patterns that human review misses: geographic concentrations, correlated exposures across lines, emerging loss trends in specific sectors. These are not replacement tools. They are augmentation tools. The underwriter still makes the call. The model ensures the call is informed by the full portfolio, not just the submission in front of them.

The PRA expects insurers to understand and control the models that influence underwriting decisions. Under Solvency II, any model that affects risk selection or pricing falls within model risk management requirements. Document the model's role, its inputs, its boundaries, and the human override process. An underwriter must always be able to reject the model's recommendation.

Document intelligence

Insurance runs on documents. Slips, endorsements, bordereaux, loss adjusters' reports, medical reports, survey reports, policy wordings. Every process in claims, underwriting, and policy administration begins with someone reading a document and typing its contents into a system. That is where the time goes.

Document intelligence is the infrastructure layer that makes every other AI use case possible. It is not a single model. It is a pipeline: ingest the document, classify its type, extract structured data, validate the extraction against business rules, and deliver the result to the downstream system. Get this right and every other AI project in the organisation becomes cheaper and faster.
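The pipeline's shape can be sketched in a few lines. The keyword classifier and line-based extractor below are stand-ins for the trained classifier and the OCR/LLM stage; validation and delivery sit downstream:

```python
# Skeleton of the pipeline: ingest -> classify -> extract. Both stages are
# deliberately naive stand-ins so the structure, not the models, is visible.
def classify(doc: str) -> str:
    upper = doc.upper()
    if "ENDORSEMENT" in upper:
        return "endorsement"
    if "SLIP" in upper:
        return "broker_slip"
    return "unknown"

def extract(doc: str, doc_type: str) -> dict:
    fields = {"doc_type": doc_type}
    for line in doc.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower()] = value.strip()
    return fields

def process(doc: str) -> dict:
    """One document in, one structured record out."""
    return extract(doc, classify(doc))
```

Because each stage is a plain function with a narrow interface, any one of them can be swapped for a model call without touching the others, which is what makes the layer reusable across claims, underwriting, and policy administration.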

The hard part is not extraction. It is classification and validation. Modern OCR and large language models can extract text from almost any document with high accuracy. The challenge is knowing what kind of document you are looking at (a broker slip versus an endorsement versus a loss run) and whether the extracted data makes sense (a property limit of 50 rather than 50 million is a decimal error, not a data point).

Build the validation layer as carefully as the extraction layer. Every extracted field needs a confidence score. Fields below threshold go to a human reviewer. Over time, the model learns from corrections and the threshold tightens. This is the flywheel: more documents processed means better accuracy means fewer human reviews means more capacity.
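The routing logic is simple enough to sketch. The 0.90 threshold and the limit sanity rule are illustrative assumptions:

```python
# Validation layer sketch: every extracted field arrives with a confidence
# score; fields below threshold, or failing a business rule, go to a human.
REVIEW_THRESHOLD = 0.90

def validate(fields):
    """fields: {name: (value, confidence)}. Returns (accepted, needs_review)."""
    accepted, needs_review = {}, []
    for name, (value, confidence) in fields.items():
        if confidence >= REVIEW_THRESHOLD:
            accepted[name] = value
        else:
            needs_review.append(name)
    # Business-rule check: a property limit of 50 is a decimal error,
    # not a data point, however confident the extractor was.
    if "limit_gbp" in accepted and accepted["limit_gbp"] < 1_000:
        needs_review.append("limit_gbp")
        del accepted["limit_gbp"]
    return accepted, needs_review
```

Note that the business rule overrides the confidence score: a high-confidence extraction that fails a sanity check still goes to a human, which is what keeps the flywheel from amplifying systematic errors.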

At Lloyd's, the Blueprint Two programme is pushing the market toward structured data exchange. Insurers who build document intelligence now will find it easier to comply with structured data requirements later. The investment serves two purposes: operational efficiency today and market readiness tomorrow.

Actuarial modelling and reserving

Actuarial reserving is one of the most consequential processes in an insurance company. The reserves determine the balance sheet, the capital requirement under Solvency II, and the profit reported under IFRS 17. Every quarter, actuaries build triangles, fit curves, apply judgement, and produce a number that the board signs off and the PRA scrutinises. The process is slow, manual, and concentrated in a small team.

AI does not replace actuarial judgement. It accelerates the work that surrounds it. The reserving process has three phases: data preparation, model fitting, and judgement. Data preparation (extracting claim-level data, reconciling across systems, building triangles) consumes 50-70% of the cycle. Model fitting (running chain-ladder, Bornhuetter-Ferguson, or stochastic models) is largely automated already. Judgement (selecting factors, adjusting for large losses, accounting for emerging trends) is where the actuary earns their practising certificate.

AI targets the first phase: automated data pipelines reconcile claims data across systems, flag anomalies, and build development triangles without manual intervention. The actuary starts with clean data rather than spending two weeks producing it, and the reserving cycle compresses from six weeks to two.
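The pipeline's output is the familiar triangle. A minimal sketch, building a cumulative development triangle from claim-level records and computing volume-weighted chain-ladder development factors; the figures in the usage test are illustrative, not real loss data:

```python
# Build a cumulative development triangle from claim-level records, then
# compute volume-weighted chain-ladder development factors.
def build_triangle(claims):
    """claims: iterable of (origin_year, dev_period, incremental_paid).
    Returns {origin_year: cumulative paid by development period}."""
    rows = {}
    for origin, dev, paid in claims:
        row = rows.setdefault(origin, {})
        row[dev] = row.get(dev, 0.0) + paid
    # Convert incremental amounts to a cumulative row per origin year.
    return {
        origin: [sum(row.get(d, 0.0) for d in range(k + 1)) for k in range(max(row) + 1)]
        for origin, row in rows.items()
    }

def development_factors(triangle):
    """Volume-weighted chain-ladder factors between successive dev periods."""
    n = max(len(row) for row in triangle.values())
    factors = []
    for d in range(n - 1):
        rows = [row for row in triangle.values() if len(row) > d + 1]
        factors.append(sum(row[d + 1] for row in rows) / sum(row[d] for row in rows))
    return factors
```

This is the mechanical part of the cycle; factor selection and adjustments for large losses remain the actuary's judgement, exactly as the text describes.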

Machine learning models can also serve as challenger models alongside traditional actuarial methods. A gradient-boosted model trained on claim-level features (peril, geography, claimant type, lawyer involvement) can produce reserve estimates that the actuary compares against their selected method. Divergence between the methods signals areas for deeper investigation. Agreement builds confidence. Neither method replaces the other. Together they are more robust than either alone.
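The comparison step itself is simple. A sketch of the divergence flag, assuming both methods produce a reserve estimate per origin year; the 10% tolerance is an illustrative assumption, not a regulatory figure:

```python
# Flag origin years where the challenger model's reserve diverges from the
# actuary's selected method by more than a relative tolerance.
def divergences(selected, challenger, tol=0.10):
    """selected, challenger: {origin_year: reserve}. Returns flagged years."""
    return sorted(
        year for year in selected
        if abs(challenger[year] - selected[year]) / selected[year] > tol
    )
```

Flagged years are where the actuary digs deeper; unflagged years are where agreement between independent methods builds confidence in the booked number.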

IFRS 17 adds a new dimension. The standard requires insurers to measure insurance contracts at current value, with explicit risk adjustments and contractual service margins. The calculations are more granular and more frequent than under IFRS 4. AI-assisted data pipelines make the transition manageable. Without them, the reporting burden risks overwhelming actuarial teams that are already stretched.

The PRA's expectations on model risk apply here. Any AI model used in the reserving process, even as a challenger, must be documented, validated, and monitored. The actuary must be able to explain why the model's output was used or not used. The audit trail is not optional. It is a regulatory requirement under both Solvency II and IFRS 17.

We have built AI inside UK insurers and Lloyd's managing agents. If you are working on claims, underwriting, documents, or reserving, put fifteen minutes on the calendar.

Let’s build AI together