Outsourcing and Third-Party Risk

Last reviewed April 2026

A bank uses a cloud provider's ML platform to serve its credit scoring model. The cloud provider uses a foundation model from another vendor. That vendor's model was trained on data from a third party. The bank has a contract with one provider. It has a dependency chain three layers deep, and a failure or breach at any layer affects the bank's customers. Outsourcing and third-party risk in AI creates dependencies that traditional vendor management frameworks were not designed to govern.

What is outsourcing and third-party risk?

Outsourcing and third-party risk in the AI context refers to the risks that arise when a financial institution relies on external providers for AI capabilities: cloud infrastructure, ML platforms, pre-trained models, data services, or complete AI solutions. These risks include operational dependency (the AI system fails because the provider fails), data risk (customer data is processed by external systems), model risk (the firm uses a model it did not build and cannot fully validate), and concentration risk (multiple critical systems depend on the same provider).

The dependency is often deeper than it appears. A firm that purchases an "AI-powered fraud detection system" from a vendor may be relying on that vendor's ML models, the vendor's data pipeline, the vendor's cloud infrastructure, and the vendor's training data. If the vendor's model degrades, the firm's fraud detection degrades. If the vendor's cloud provider has an outage, the firm's fraud detection goes offline. The firm bears the regulatory and operational consequences of failures in a chain it does not control.

Financial services regulators have long regulated outsourcing, but AI introduces new dimensions. The opacity of ML models makes vendor model validation harder than traditional system validation. The continuous learning capabilities of some AI systems mean the model can change without the firm's knowledge. And the concentration of AI infrastructure in a small number of cloud providers creates systemic concentration risk that individual firm-level outsourcing assessments may not capture.

The landscape

The PRA's Supervisory Statement SS2/21 on outsourcing and third-party risk management requires firms to identify, assess, manage, and monitor the risks arising from outsourcing and third-party arrangements. The PRA expects firms to maintain the same level of oversight over outsourced activities as over internally provided ones. For AI, this means the firm must be able to validate, monitor, and govern a vendor's AI model to the same standard as an internal model, a requirement that many vendor contracts do not support.

The FCA's operational resilience framework requires firms to understand their third-party dependencies and ensure that important business services can continue within impact tolerances when a third party fails. For AI-dependent services, this means having fallback mechanisms for vendor-provided AI capabilities, which is technically and contractually more complex than for traditional outsourced services.
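One common pattern for keeping an important business service within impact tolerance is a degraded-mode wrapper around the vendor call. The sketch below is illustrative only: `vendor_score` and `rules_based_score` are hypothetical stand-ins for a vendor fraud-scoring API and a simpler internal fallback, not any real product's interface.

```python
# Sketch of a fallback wrapper for a vendor-provided AI capability.
# All function names and thresholds here are illustrative assumptions.

def vendor_score(transaction: dict) -> float:
    """Stand-in for a call to the vendor's fraud-scoring API."""
    raise ConnectionError("vendor outage")  # simulate a provider failure

def rules_based_score(transaction: dict) -> float:
    """Conservative internal fallback: simple threshold rules."""
    score = 0.0
    if transaction.get("amount", 0) > 10_000:
        score += 0.5
    if transaction.get("country") not in {"GB", "IE"}:
        score += 0.3
    return min(score, 1.0)

def fraud_score(transaction: dict) -> tuple[float, str]:
    """Try the vendor model first; degrade to internal rules on failure."""
    try:
        return vendor_score(transaction), "vendor"
    except Exception:
        return rules_based_score(transaction), "fallback"

score, source = fraud_score({"amount": 25_000, "country": "US"})
# During a vendor outage the service keeps responding, with the result
# tagged so downstream systems know it came from the degraded path.
```

Tagging each result with its source matters operationally: it lets the firm measure how long it ran in degraded mode, which is exactly what an impact-tolerance assessment asks.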

The Bank of England's critical third-party regime, which took effect in 2025, gives the PRA and FCA direct oversight of third-party providers that are critical to the stability of the UK financial sector. Cloud infrastructure providers that host AI systems for multiple financial institutions are likely candidates for designation as critical third parties. This regime addresses the systemic concentration risk that firm-level outsourcing management cannot address on its own.

How AI changes this

Vendor AI model monitoring extends the firm's model risk management framework to vendor-provided models. Rather than relying solely on the vendor's validation, the firm independently monitors the model's performance on its own data: tracking accuracy, fairness, and stability over time. When the vendor updates the model, the firm's monitoring detects any change in behaviour, triggering assessment of whether the update is appropriate for its use case.
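The monitoring described above can be reduced to a simple comparison of current metrics against a validated baseline. The metrics and tolerance values in this sketch are illustrative assumptions, not regulatory thresholds.

```python
# Minimal sketch of independent monitoring of a vendor model's
# performance on the firm's own data. Metric names and tolerances
# are illustrative assumptions.

def check_vendor_model(baseline: dict, current: dict,
                       tolerances: dict) -> list[str]:
    """Compare current metrics against a validated baseline.

    Returns a list of alerts; an empty list means no breach detected.
    A breach should trigger review of any recent vendor model update.
    """
    alerts = []
    for metric, tol in tolerances.items():
        drift = abs(current[metric] - baseline[metric])
        if drift > tol:
            alerts.append(
                f"{metric} drifted by {drift:.3f} (tolerance {tol:.3f})"
            )
    return alerts

baseline = {"accuracy": 0.91, "approval_rate_gap": 0.02}
current = {"accuracy": 0.85, "approval_rate_gap": 0.03}
tolerances = {"accuracy": 0.03, "approval_rate_gap": 0.02}

alerts = check_vendor_model(baseline, current, tolerances)
# The accuracy drop breaches its tolerance and raises an alert.
```

Because the check runs on the firm's own data, it catches a silent vendor model update the moment its behaviour changes on the firm's population, without needing access to the vendor's internals.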

Contract provisions for AI transparency are evolving. Leading practices include contractual rights to model documentation (model cards, training data descriptions, validation results), notification of model changes, access to performance data, and the ability to conduct independent validation testing. These provisions give the firm the information it needs to meet its regulatory obligations for model governance, even when the model is externally provided.

Supply chain mapping for AI identifies every provider in the dependency chain, from the cloud infrastructure provider to the model vendor to the data provider. For each link, the firm assesses the risk of failure, the impact on its AI systems, and the available mitigations. This mapping is analogous to the supply chain analysis that manufacturing firms conduct, adapted for the AI technology stack.
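The dependency chain lends itself to a simple graph representation: each provider lists its own upstream dependencies, and a traversal surfaces every link, including providers with which the firm has no direct contract. The provider names below are illustrative, echoing the bank example in the introduction.

```python
# Sketch of supply-chain mapping for an AI system. Provider names
# are illustrative assumptions, not real vendors.

dependencies = {
    "fraud-detection-system": ["vendor-ml-model"],
    "vendor-ml-model": ["foundation-model-provider", "vendor-cloud"],
    "foundation-model-provider": ["training-data-broker"],
    "vendor-cloud": [],
    "training-data-broker": [],
}

def upstream_providers(system: str, deps: dict) -> set[str]:
    """Return every provider the system transitively depends on."""
    seen = set()
    stack = [system]
    while stack:
        node = stack.pop()
        for provider in deps.get(node, []):
            if provider not in seen:
                seen.add(provider)
                stack.append(provider)
    return seen

chain = upstream_providers("fraud-detection-system", dependencies)
# One contract, four upstream providers: the traversal makes the
# hidden layers of the chain explicit.
```

Each provider in the resulting set is then assessed for failure risk, impact, and mitigations, exactly as the mapping exercise above describes.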

Concentration risk analysis identifies where multiple AI systems depend on the same provider. If five critical AI systems all run on the same cloud provider's ML platform, an outage at that provider disables five services simultaneously. The firm must assess whether this concentration is acceptable given its operational resilience tolerances and whether diversification is feasible and cost-effective.
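Concentration hotspots fall out of inverting the system-to-provider map. The sketch below flags any provider on which a threshold number of systems depend; the data and threshold are illustrative assumptions.

```python
# Sketch of concentration analysis: invert the system -> provider map
# to find providers shared by many critical systems. Data illustrative.
from collections import defaultdict

system_providers = {
    "credit-scoring": ["cloud-a", "model-vendor-x"],
    "fraud-detection": ["cloud-a", "model-vendor-y"],
    "aml-screening": ["cloud-a"],
    "chatbot": ["cloud-b"],
}

def concentration(systems: dict, threshold: int = 2) -> dict:
    """Providers on which `threshold` or more systems depend."""
    by_provider = defaultdict(list)
    for system, providers in systems.items():
        for p in providers:
            by_provider[p].append(system)
    return {p: s for p, s in by_provider.items() if len(s) >= threshold}

hotspots = concentration(system_providers)
# A single shared cloud platform emerges as the concentration hotspot:
# an outage there takes three critical systems down at once.
```

The output is the starting point for the resilience question the text poses: is the concentration acceptable within impact tolerances, and is diversification worth the cost?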

What to know before you start

Negotiate AI-specific contract terms before procurement, not after. Once a vendor's AI system is embedded in your operations, your leverage to negotiate transparency, validation access, and change notification is significantly reduced. Include AI governance requirements in the procurement process: model documentation, performance data access, change notification, validation rights, and exit provisions.

Vendor models are in scope for model validation. The PRA's SS1/23 requires that all material models be validated, including vendor-provided ones. The validation approach differs from internal models (you may not have access to source code or training data), but the obligation is the same. Negotiate the access and documentation rights needed for validation at procurement, and include validation costs in the total cost of ownership assessment.

Plan for vendor exit. If you need to replace a vendor's AI model, how long will the transition take? What data and documentation do you need from the outgoing vendor? Can you operate in a fallback mode during the transition? Exit planning for AI vendors is more complex than for traditional technology vendors because of the data dependency, model retraining requirements, and integration complexity. Include exit provisions in contracts and test the exit plan periodically.

Start by mapping your AI third-party dependencies. For each AI system, identify every external provider in the dependency chain. Assess each provider against your outsourcing risk framework, with specific attention to AI-related risks: model opacity, continuous learning, data processing, and concentration. The mapping will reveal risks that your current vendor management processes may not have captured and inform both contract remediation and governance framework updates.
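The AI-specific assessment layered onto a standard outsourcing framework can be sketched as a screening pass per provider. The risk attributes and flag rules below are illustrative assumptions, not a regulatory checklist.

```python
# Sketch of an AI-specific screen layered on a standard outsourcing
# assessment. Attribute names and descriptions are illustrative.

AI_RISK_CHECKS = {
    "model_opaque": "no access to model documentation or validation data",
    "continuous_learning": "model can change without firm notification",
    "processes_customer_data": "customer data leaves the firm's estate",
}

def screen_provider(name: str, attributes: dict) -> list[str]:
    """Return the AI-specific risk flags raised for one provider."""
    return [
        f"{name}: {description}"
        for attr, description in AI_RISK_CHECKS.items()
        if attributes.get(attr, False)
    ]

flags = screen_provider(
    "model-vendor-x",
    {"model_opaque": True, "continuous_learning": True,
     "processes_customer_data": False},
)
# Raised flags become the input to contract remediation: each one maps
# to a missing contractual right (documentation, change notification).
```

Running the screen across every provider surfaced by the dependency mapping produces the remediation list the paragraph above calls for.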
