Data Protection Impact Assessment (DPIA)

Last reviewed April 2026

A bank plans to deploy an AI model that analyses customer transaction data to predict churn risk. The model processes spending patterns, location data, and communication frequency across 3.2 million current account holders. Before the model goes live, the bank is legally required to assess the privacy risks. A Data Protection Impact Assessment (DPIA) is the structured process for doing so, and for AI in financial services, it is both a legal obligation and a practical safeguard against privacy failures that no technical team would identify on their own.

What is a DPIA?

A DPIA is a process designed to identify, assess, and mitigate the data protection risks of a proposed processing activity. Under UK GDPR Article 35, a DPIA is mandatory when processing is likely to result in a high risk to individuals' rights and freedoms. Three triggers are particularly relevant for AI in financial services: systematic and extensive profiling with significant effects, processing of special category data at scale, and automated decision-making that produces legal or similarly significant effects.

The DPIA must describe the processing, assess its necessity and proportionality, identify risks to individuals, and define measures to address those risks. The assessment is not a form-filling exercise. It requires genuine analysis of how the data will be processed, what could go wrong, who would be affected, and what controls are proportionate. For AI systems, this includes assessing risks from model inaccuracy, bias, data leakage, and mission creep (where data collected for one purpose is used for another).
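The four required elements — processing description, necessity and proportionality, risks, and mitigations — can be made machine-checkable. The sketch below is a hypothetical illustration (the class and field names are ours, not from any ICO template), assuming a simple rule that an assessment is incomplete if any narrative section is empty or any identified risk lacks a defined control.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the four required DPIA elements as a structured
# record, so completeness can be checked automatically before sign-off.
@dataclass
class DPIARecord:
    system_name: str
    processing_description: str          # what data, how processed, by whom
    necessity_justification: str         # why this processing is needed
    proportionality_justification: str   # why less intrusive options fall short
    risks: list[str] = field(default_factory=list)             # e.g. "model bias"
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> control

    def is_complete(self) -> bool:
        """Complete only if every section is filled and every risk has a mitigation."""
        sections = (self.processing_description,
                    self.necessity_justification,
                    self.proportionality_justification)
        return all(sections) and all(r in self.mitigations for r in self.risks)
```

A record that identifies "model bias" as a risk but maps no control to it would fail `is_complete()` — the same gap the text describes as a form-filling DPIA rather than a genuine assessment.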

The ICO publishes a list of processing activities that require a DPIA, which includes profiling and automated decision-making in financial services. If the proposed AI system falls within this list, the DPIA is mandatory. If it does not, a DPIA may still be prudent as a risk management tool and as evidence of compliance in the event of a complaint or investigation.

The landscape

The ICO's guidance on DPIAs, updated to reflect AI-specific considerations, provides a structured template and worked examples. The guidance emphasises that the DPIA should be conducted at the design stage, before processing begins, and that it should be a living document updated when the processing changes. For AI systems that learn and evolve, this means the DPIA must be reviewed when the model is retrained, when new data sources are added, or when the model's use is expanded.

The EU AI Act requires a "fundamental rights impact assessment" for high-risk AI systems deployed by public bodies and certain private entities. This is distinct from but related to the GDPR DPIA. For financial institutions, the practical approach is to extend the DPIA to cover the broader rights considerations (non-discrimination, fairness, transparency) that the EU AI Act addresses, producing a single assessment that satisfies both requirements.

The FCA expects firms to consider data protection risks as part of their broader risk management for AI systems. While the FCA does not mandate DPIAs (that is the ICO's role), it expects firms to demonstrate that they have assessed the risks of AI processing and implemented appropriate controls. A DPIA provides ready-made evidence of this assessment.

How AI changes this

AI introduces risks that traditional DPIAs may not adequately capture. Model inaccuracy can produce decisions that are wrong for specific individuals, with consequences that range from inconvenience (a false fraud alert) to material harm (a wrongful credit denial). Bias can produce systematically unfair outcomes for protected groups. Data drift can cause a model that was appropriate at deployment to become inappropriate as the population changes. These AI-specific risks must be explicitly assessed in the DPIA.

Automated DPIA tools provide templates pre-configured for AI processing, with risk categories and mitigation options specific to machine learning systems. These tools guide the assessor through the relevant questions, ensure consistency across assessments, and maintain a register of completed DPIAs. For organisations deploying multiple AI systems, the tooling ensures that no system goes live without an assessment.
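The "no system goes live without an assessment" rule amounts to a simple gate on a register. This is a minimal sketch of that idea, not any particular vendor's tool; the class and method names are illustrative assumptions.

```python
# Hypothetical sketch of a DPIA register gate: deployment is blocked unless
# a completed assessment is on record for the system.
class DPIARegister:
    def __init__(self) -> None:
        # system_name -> {"complete": bool, "reviewed": str (ISO date)}
        self._assessments: dict[str, dict] = {}

    def record(self, system_name: str, complete: bool, reviewed: str) -> None:
        """Register (or update) the assessment status for a system."""
        self._assessments[system_name] = {"complete": complete, "reviewed": reviewed}

    def may_go_live(self, system_name: str) -> bool:
        """True only if a completed DPIA exists for this system."""
        entry = self._assessments.get(system_name)
        return bool(entry and entry["complete"])
```

In practice the gate would sit in the deployment pipeline, so the check runs automatically rather than relying on someone remembering to consult the register.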

The connection to the AI risk assessment should be explicit. The DPIA focuses on data protection risks; the AI risk assessment covers the broader set of risks including model risk, operational risk, and regulatory risk. Conducting both assessments in parallel, or integrating them into a single process, avoids duplication and ensures that data protection risks are considered alongside other risk categories.

Ongoing monitoring of the risks identified in the DPIA ensures that the assessment remains current. If the DPIA identified bias risk and specified fairness monitoring as a mitigation, the monitoring must actually be in place and producing results. A DPIA that identifies risks but does not verify that mitigations are effective is an incomplete control.
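One simple way to verify a mitigation is "in place and producing results" is a staleness check: if the fairness monitor has not emitted a result recently, the mitigation is treated as ineffective. The function below is an assumed sketch — the 30-day threshold and the idea that `last_result_date` comes from the monitoring system are illustrative, not from any guidance.

```python
from datetime import date, timedelta

# Hypothetical sketch: a mitigation named in the DPIA (e.g. fairness
# monitoring) counts as effective only if it has produced output recently.
def mitigation_is_effective(last_result_date: date,
                            max_staleness_days: int = 30) -> bool:
    """True if the mitigation produced a result within the staleness window."""
    return (date.today() - last_result_date) <= timedelta(days=max_staleness_days)
```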

What to know before you start

Conduct the DPIA before development begins, not merely before deployment. If the assessment identifies fundamental privacy risks that require design changes, discovering this after the model is built wastes the development investment. A DPIA conducted at the design stage shapes the system's architecture: what data is collected, how it is processed, what safeguards are built in, and what limitations apply.

Involve your Data Protection Officer (DPO) and, where necessary, the ICO. Article 36 requires consultation with the ICO when the DPIA indicates that the processing would result in high risk that the firm cannot mitigate. This threshold is relevant for AI systems that make automated decisions about large populations based on profiling. Engaging the ICO early, before a complaint or investigation, demonstrates good faith and may avoid enforcement action.

The DPIA must be reviewed when the processing changes. For AI systems, this means reviewing the assessment when the model is retrained on new data, when new data sources are added, when the model's use is expanded to new customer segments, or when the model's performance degrades. Define triggers for DPIA review in your controls framework so that reviews happen systematically rather than ad hoc.
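Defining the review triggers in the controls framework can be as literal as encoding them as named events and checking each change against the set. The event names below are illustrative assumptions drawn from the triggers listed above.

```python
# Hypothetical sketch: DPIA review triggers as explicit events, so reviews
# fire systematically rather than ad hoc. Event names are illustrative.
REVIEW_TRIGGERS = {
    "model_retrained",
    "new_data_source_added",
    "use_expanded_to_new_segment",
    "performance_degraded",
}

def dpia_review_required(events: set[str]) -> bool:
    """True if any event occurring since the last review is a defined trigger."""
    return bool(events & REVIEW_TRIGGERS)
```

A change-management process would collect the events since the last review and call this check, so a retrain or a new data feed cannot slip through without the assessment being revisited.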

Start with AI systems that process personal data for automated decision-making. These are the systems most likely to trigger the DPIA obligation and most likely to attract ICO scrutiny. Complete DPIAs for these systems first, using the ICO's template and guidance. Build the DPIA process into your AI governance lifecycle so that every new AI system that processes personal data is assessed before development begins.
