Shadow AI

Last reviewed April 2026

A relationship manager pastes customer financial data into ChatGPT to generate a credit summary. A claims handler uses a personal AI tool to draft correspondence. An analyst builds a pricing model using an unapproved ML library on their laptop. None of these appear in the firm's AI inventory. None have been risk-assessed. All process sensitive data outside governed channels. Shadow AI is the unauthorised use of AI tools across the organisation, and in financial services, it creates risks that the governance framework cannot manage because it cannot see them.

What is shadow AI?

Shadow AI refers to AI tools, models, and services used within an organisation without the knowledge or approval of the central governance, risk, or IT functions. It is the AI equivalent of shadow IT: employees adopting tools that solve immediate problems without going through formal procurement, risk assessment, or security review. The tools range from consumer-grade generative AI services to departmentally built machine learning models that bypass the enterprise model risk management framework.

The drivers are understandable. Formal AI governance processes can take months, while consumer AI tools are available instantly and often free. The gap between what employees can do with ungoverned AI and what the governance process permits creates a standing incentive to work around the controls. Widespread shadow AI is therefore a signal that the governance framework is not serving the organisation effectively, not merely that employees are non-compliant.

The risks in financial services are acute. Customer data processed through external AI services may breach data protection obligations. Decisions informed by ungoverned models may violate regulatory requirements for model risk management. AI-generated content used in customer communications may contain errors or inappropriate advice. And if an incident occurs, the firm cannot explain what happened because the system was never documented, monitored, or validated.

The landscape

Generative AI has accelerated the shadow AI problem. Before ChatGPT's public launch in November 2022, building an AI system required technical skills that limited shadow AI to data-literate teams. Now, any employee with a web browser can use AI to process data, generate content, and inform decisions. Surveys consistently find that 50 to 70 per cent of knowledge workers use AI tools at work, but only a fraction of that usage is governed.

The FCA and PRA expect firms to maintain control over all systems that process customer data or inform regulated decisions. Shadow AI that processes personal data without appropriate safeguards may breach UK GDPR. Shadow AI that informs credit, pricing, or claims decisions without validation may breach model risk management requirements. The regulatory risk is not hypothetical: every additional ungoverned tool in use makes one of these breaches more likely.

The ICO has issued guidance on generative AI and data protection, clarifying that organisations are responsible for personal data processed through third-party AI services, even when the processing is initiated by individual employees. This means that a single employee pasting customer data into an external AI service can create a data breach for which the organisation is liable.

How AI changes this

Network monitoring tools detect traffic to known AI service endpoints (OpenAI, Anthropic, Google, Microsoft Copilot), providing visibility into which AI services are being accessed from the corporate network. This does not prevent shadow AI (employees can use personal devices and networks) but it provides the governance function with data about the scale and nature of ungoverned AI usage.
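As a sketch of what that visibility might look like in practice, the snippet below counts requests to known AI endpoints in a proxy log. It assumes a simple whitespace-delimited log with the destination host in the third field (adjust the parsing for your proxy's actual format); the domain list and log path are illustrative, not exhaustive.

```python
from collections import Counter

# Illustrative endpoints only; a real deployment would maintain a
# curated, regularly updated list.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per AI service domain found in the log."""
    hits: Counter = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            fields = line.split()
            if len(fields) < 3:
                continue
            host = fields[2].lower()
            # Match the domain itself or any subdomain of it.
            for domain in AI_SERVICE_DOMAINS:
                if host == domain or host.endswith("." + domain):
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    # "proxy_access.log" is a hypothetical path.
    for domain, count in scan_proxy_log("proxy_access.log").most_common():
        print(f"{count:>6}  {domain}")
```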

Data loss prevention (DLP) systems can be configured to detect and block sensitive data being sent to external AI services. Rules that prevent customer identifiers, financial data, and other sensitive content from being pasted into generative AI interfaces address the most acute data protection risk. DLP is a technical control that operates alongside policy controls.
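A minimal sketch of the detection half of such a rule, assuming regex matching on outbound text. Production DLP platforms use far richer detection (checksum validation, contextual analysis, ML classifiers), so treat these patterns as illustrative and expect false positives, particularly on bare eight-digit numbers.

```python
import re

# Illustrative patterns for UK financial identifiers. No Luhn check is
# applied to the card pattern, so false positives are expected.
SENSITIVE_PATTERNS = {
    "uk_sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "account_number": re.compile(r"\b\d{8}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){16}\b"),
    "national_insurance": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE
    ),
}

def check_outbound_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns matched in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a paste destined for an external AI service.
paste = "Summarise creditworthiness for account 12345678, sort code 20-00-00."
violations = check_outbound_text(paste)
if violations:
    print(f"Blocked: matched {', '.join(violations)}")
```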

Approved AI tooling that is easier to use than shadow alternatives reduces the incentive to work around governance. An internal AI assistant that can summarise documents, draft correspondence, and answer questions, with appropriate data protection and governance controls built in, addresses the same user needs that drive shadow AI adoption. If the governed tool is as good as the ungoverned one, shadow usage declines naturally.
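One way to picture the governed alternative is as a thin gateway that redacts and audit-logs every prompt before it reaches an approved model endpoint. In this sketch the endpoint URL, the response shape, and the redaction rule are all assumptions; substitute your firm's actual provider, identity system, and DLP tooling.

```python
import json
import logging
import re
import urllib.request

logging.basicConfig(filename="ai_gateway_audit.log", level=logging.INFO)

# Hypothetical internal endpoint fronting the firm's approved provider.
APPROVED_ENDPOINT = "https://ai-gateway.internal.example/v1/complete"

def redact(text: str) -> str:
    """Stand-in for a real DLP step: mask any run of eight or more digits."""
    return re.sub(r"\d{8,}", "[REDACTED]", text)

def governed_complete(user_id: str, prompt: str) -> str:
    """Redact, audit-log, and forward a prompt to the approved endpoint."""
    safe_prompt = redact(prompt)
    logging.info("user=%s prompt_chars=%d redacted=%s",
                 user_id, len(prompt), safe_prompt != prompt)
    request = urllib.request.Request(
        APPROVED_ENDPOINT,
        data=json.dumps({"prompt": safe_prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # Assumes the endpoint returns JSON with a "completion" field.
        return json.loads(response.read())["completion"]
```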

Regular inventory sweeps that include technical scanning alongside manual surveys catch shadow AI deployments that self-reporting misses. Scanning development environments for ML libraries, searching file shares for model artefacts, and reviewing cloud service usage logs all contribute to a more complete picture of the organisation's actual AI footprint.
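A crude version of such a technical scan might walk a development tree or file-share mount looking for ML library imports and serialised model files. The library names, file extensions, and scan root below are illustrative starting points, not a complete signature set.

```python
import re
from pathlib import Path

# Illustrative signatures; extend to match your technology estate.
ML_IMPORT = re.compile(
    r"^\s*(?:import|from)\s+(sklearn|torch|tensorflow|xgboost|lightgbm|keras)\b",
    re.MULTILINE,
)
MODEL_ARTEFACTS = {".pkl", ".joblib", ".pt", ".h5", ".onnx", ".pb"}

def sweep(root: str) -> dict[str, list[Path]]:
    """Collect Python files importing ML libraries, plus serialised models."""
    findings: dict[str, list[Path]] = {"imports": [], "artefacts": []}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() in MODEL_ARTEFACTS:
            findings["artefacts"].append(path)
        elif path.suffix == ".py":
            try:
                if ML_IMPORT.search(path.read_text(errors="ignore")):
                    findings["imports"].append(path)
            except OSError:
                continue
    return findings

if __name__ == "__main__":
    results = sweep("/mnt/shared")  # hypothetical file-share mount
    print(f"{len(results['imports'])} scripts import ML libraries")
    print(f"{len(results['artefacts'])} serialised model artefacts found")
```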

What to know before you start

Prohibition does not work. Banning all AI tool usage drives shadow AI deeper underground, where it is harder to detect and more dangerous. A pragmatic approach combines clear boundaries (no customer data in external AI services, no ungoverned models in regulated decisions) with approved alternatives that meet legitimate business needs. The goal is to channel AI usage through governed pathways, not to eliminate it.

Training is the first line of defence. Most shadow AI usage is not malicious; it is uninformed. Employees who understand the data protection risks, the regulatory implications, and the availability of approved alternatives are less likely to use ungoverned tools. Invest in practical, scenario-based training rather than compliance-oriented slide decks. A 30-minute session that walks through real examples of shadow AI risk is more effective than a two-hour policy briefing.

Speed up the governance process. If the time from AI use case proposal to approved deployment is six months, shadow AI will thrive. Review your AI controls framework for bottlenecks and eliminate unnecessary friction for low-risk use cases. A governance process that takes two weeks for a low-risk internal tool and eight weeks for a high-risk customer-facing model is proportionate and reduces the incentive to bypass it.
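To make the proportionality concrete, a triage rule along these lines maps use-case attributes to a review tier and turnaround target, mirroring the two-week and eight-week timelines above. The tier criteria are illustrative; a real framework would align them with the firm's model risk policy.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    customer_facing: bool
    uses_personal_data: bool
    informs_regulated_decision: bool  # e.g. credit, pricing, claims

def triage(case: UseCase) -> tuple[str, int]:
    """Return (risk tier, target review turnaround in weeks)."""
    if case.informs_regulated_decision or case.customer_facing:
        return ("high", 8)
    if case.uses_personal_data:
        return ("medium", 4)
    return ("low", 2)

# An internal drafting assistant with no personal data lands in the fast lane.
tier, weeks = triage(UseCase(False, False, False))
print(f"Internal drafting tool: {tier} risk, {weeks}-week review target")
```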

Start by measuring the problem. Run a confidential survey and a technical scan to understand the current scale of shadow AI in your organisation. The results will inform your response: whether you need better tooling, faster governance, clearer policies, or some combination. You cannot manage what you have not measured.
