GDPR Automated Decision-Making
Last reviewed April 2026
A customer's insurance application is declined by an AI model in three seconds. No human saw the application. No human reviewed the decision. The customer receives a letter stating the outcome with no explanation of the logic. Under the UK GDPR, this scenario is unlawful unless a narrow exception applies and specific safeguards are in place. GDPR automated decision-making provisions give individuals specific rights when decisions about them are made solely by machines, and financial services is the sector where these provisions have the sharpest teeth.
What is GDPR automated decision-making?
Article 22 of the UK GDPR provides that individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significant effects. Financial services decisions (credit approvals, insurance pricing, account opening, fraud alerts) clearly produce similarly significant effects. The right is not absolute: automated decisions are permitted where they are necessary for a contract, authorised by law, or based on explicit consent. But even where permitted, additional safeguards apply.
The safeguards required under Article 22(3) are specific: the right to obtain human intervention, the right to express one's point of view, and the right to contest the decision. These are not procedural formalities. They require the firm to have a functioning mechanism for human review of automated decisions, a channel for the customer to present additional information, and a process for reconsidering the decision based on that input. Many firms that use AI in decision-making have not implemented all three safeguards effectively.
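The three safeguards can be modelled as explicit states on each decision record, so a decision cannot be treated as compliant until all three mechanisms exist. A minimal sketch, with hypothetical field names (nothing here is a prescribed GDPR data model):

```python
from dataclasses import dataclass

# Hypothetical tracker for the three Article 22(3) safeguards on one decision.
@dataclass
class Article22Safeguards:
    human_intervention_available: bool = False  # functioning human-review route
    view_channel_open: bool = False             # customer can submit information
    contest_process_defined: bool = False       # decision can be reconsidered

    def complete(self) -> bool:
        """True only when all three safeguards are operational."""
        return (self.human_intervention_available
                and self.view_channel_open
                and self.contest_process_defined)

# A firm with only a human-review route still fails the Article 22(3) test.
safeguards = Article22Safeguards(human_intervention_available=True)
print(safeguards.complete())  # False: two safeguards still missing
```

Representing the safeguards as data rather than policy text makes the gap visible: a report over all decision systems can show exactly which of the three mechanisms each one lacks.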
Article 22 interacts with the transparency provisions of Articles 13 and 14, which require firms to inform individuals about the existence of automated decision-making, meaningful information about the logic involved, and the significance and envisaged consequences. The ICO's guidance clarifies that "meaningful information about the logic" does not require disclosing the algorithm but does require explaining the factors, their significance, and how they influence the outcome.
The landscape
The ICO has issued detailed guidance on automated decision-making and profiling, including specific examples from financial services. The guidance clarifies that a decision is "solely automated" if no human has meaningful involvement in the decision. A human who rubber-stamps an automated recommendation without genuine consideration is not providing meaningful involvement. On this interpretation, many AI-assisted decision processes in financial services qualify as solely automated unless the human review is substantive.
The EU AI Act creates additional obligations that overlay GDPR requirements. For high-risk AI systems, the Act requires transparency, human oversight, and technical documentation that go beyond GDPR's provisions. UK firms with EU operations must comply with both regimes, which align in principle but differ in specific requirements. The GDPR provides individual rights; the EU AI Act provides system-level governance requirements.
Enforcement is increasing. The ICO has investigated automated decision-making in financial services, and European data protection authorities have issued significant fines for GDPR violations related to automated decisions. The intersection of data protection enforcement, FCA conduct regulation, and emerging AI regulation creates a multi-layered compliance obligation that firms must address coherently.
How AI changes this
Automated Article 22 compliance mechanisms can be built into the AI decision pipeline. Before a decision is finalised, the system checks whether it qualifies as solely automated and, if so, ensures that the required safeguards are in place: the customer is informed, meaningful human review is available, and the customer's right to contest is enabled. This compliance-by-design approach is more reliable than relying on process compliance alone.
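That compliance-by-design check can be sketched as a gate that runs before a decision is released. The flag names below are illustrative assumptions about what a decision pipeline would record, not a standard API:

```python
# Hypothetical compliance gate run before an automated decision is released.

def qualifies_as_solely_automated(meaningful_human_involvement: bool) -> bool:
    # A rubber-stamp review does not count as meaningful involvement.
    return not meaningful_human_involvement

def release_decision(decision: dict) -> dict:
    if qualifies_as_solely_automated(decision["meaningful_human_involvement"]):
        required = ("customer_informed", "human_review_available", "contest_enabled")
        missing = [s for s in required if not decision.get(s)]
        if missing:
            # Hold the decision rather than release it non-compliantly.
            return {"status": "held", "missing_safeguards": missing}
    return {"status": "released"}

print(release_decision({
    "meaningful_human_involvement": False,
    "customer_informed": True,
    "human_review_available": False,
    "contest_enabled": True,
}))  # held: the human-review route is not in place
```

The design choice that matters is the failure mode: a missing safeguard blocks the decision at the gate, rather than being discovered later in an audit.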
Explanation generation tools produce the "meaningful information about the logic" that Articles 13 and 14 require. For a credit decision, this might be: "Your application was assessed based on your income, employment history, existing debt, and payment history. The main factors in the decision were [factors]. If you believe relevant information was not considered, you can request a review." This combines the logic explanation with the Article 22 safeguard in a single communication.
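A generator for that communication can be as simple as a template that combines the assessed data, the main factors, and the review offer. This is a sketch of the pattern described above, with hypothetical names:

```python
# Hypothetical explanation generator combining the Articles 13/14 logic
# disclosure with the Article 22 review offer in one customer communication.

ASSESSED_DATA = ["income", "employment history", "existing debt", "payment history"]

def decision_explanation(main_factors: list[str]) -> str:
    return (
        "Your application was assessed based on your "
        + ", ".join(ASSESSED_DATA[:-1]) + ", and " + ASSESSED_DATA[-1] + ". "
        + "The main factors in the decision were "
        + ", ".join(main_factors) + ". "
        + "If you believe relevant information was not considered, "
        + "you can request a review."
    )

print(decision_explanation(["existing debt", "payment history"]))
```

In practice the main factors would come from the model's explainability layer (for example, the top-weighted features for this applicant), but the letter structure stays fixed so every customer receives both the logic summary and the contest route.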
Human-in-the-loop architecture that satisfies Article 22's requirements provides genuine, not nominal, human involvement. This means designing the review process so that the human has access to all relevant information, sufficient time to form an independent judgement, and the authority to override the automated decision. Monitoring the override rate, as discussed in the HITL context, provides evidence that the human involvement is meaningful.
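Override-rate monitoring reduces to a simple calculation over review records. A minimal sketch; the threshold below is purely illustrative, not a regulatory figure:

```python
# Hypothetical override-rate monitor: a near-zero override rate suggests
# reviewers are rubber-stamping rather than providing meaningful involvement.

def override_rate(reviews: list[dict]) -> float:
    overridden = sum(1 for r in reviews if r["human_outcome"] != r["ai_outcome"])
    return overridden / len(reviews)

reviews = [
    {"ai_outcome": "decline", "human_outcome": "decline"},
    {"ai_outcome": "decline", "human_outcome": "approve"},
    {"ai_outcome": "approve", "human_outcome": "approve"},
    {"ai_outcome": "decline", "human_outcome": "decline"},
]
rate = override_rate(reviews)
print(f"override rate: {rate:.0%}")  # 25%

RUBBER_STAMP_THRESHOLD = 0.02  # illustrative alert level, set by the firm
if rate < RUBBER_STAMP_THRESHOLD:
    print("flag: review may not constitute meaningful involvement")
```

A low override rate is evidence to investigate, not proof of rubber-stamping; it should be read alongside reviewer qualifications, time per case, and the information available to the reviewer.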
Subject access request automation retrieves the Article 22-specific information for individual decisions: what data was processed, what automated decision was made, what logic was applied, and what safeguards are available. This is a specific application of the broader auditability infrastructure, configured to produce the output that GDPR requires for individual requests.
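The Article 22-specific extract can be assembled mechanically from stored decision records. A sketch under the assumption that the pipeline logs the fields below (the record schema is hypothetical):

```python
# Hypothetical assembly of the Article 22-specific portion of a subject
# access response from a stored decision record.

def article22_sar_extract(decision_record: dict) -> dict:
    return {
        "data_processed": decision_record["input_fields"],
        "automated_decision": decision_record["outcome"],
        "logic_summary": decision_record["factor_explanation"],
        "safeguards": [
            "request human review",
            "submit additional information",
            "contest the decision",
        ],
    }

record = {
    "input_fields": ["income", "existing debt"],
    "outcome": "declined",
    "factor_explanation": "existing debt relative to income was the main factor",
}
print(article22_sar_extract(record)["automated_decision"])  # declined
```

The point is that SAR fulfilment becomes a query over the audit log rather than a manual reconstruction: if the pipeline does not log these fields at decision time, the request cannot be answered reliably later.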
What to know before you start
Map every AI-driven decision process and classify it under Article 22. Is the decision solely automated? Does it produce legal or similarly significant effects? If both, the Article 22 safeguards are mandatory. Many firms have not conducted this mapping and may be operating AI systems that trigger Article 22 without the required safeguards. The mapping exercise is a priority compliance action.
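The two-part test above is a conjunction, so the mapping exercise can be run as a triage over the firm's decision inventory. A minimal sketch with an illustrative inventory (the classifications shown are examples, not legal conclusions):

```python
# Hypothetical Article 22 triage over an inventory of AI-driven decisions:
# safeguards are mandatory when a decision is solely automated AND produces
# legal or similarly significant effects.

def article22_applies(solely_automated: bool, significant_effect: bool) -> bool:
    return solely_automated and significant_effect

inventory = [
    {"process": "credit approval", "solely_automated": True, "significant_effect": True},
    {"process": "marketing segmentation", "solely_automated": True, "significant_effect": False},
    {"process": "insurance pricing", "solely_automated": False, "significant_effect": True},
]
in_scope = [p["process"] for p in inventory
            if article22_applies(p["solely_automated"], p["significant_effect"])]
print(in_scope)  # ['credit approval']
```

The hard work is in populating the two flags honestly, in particular deciding whether the human involvement in each process is genuinely meaningful; the classification itself is trivial once that judgement is documented.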
Meaningful human involvement is the most practical route to compliance. If a human with appropriate authority and information reviews AI decisions before they take effect, Article 22's restriction on solely automated decisions does not apply. But the involvement must be genuine. The ICO tests this by looking at the reviewer's qualifications, the time available for review, the information provided, and the rate at which automated recommendations are overridden.
The right to contest requires a process that actually reconsiders the decision. A complaints process that reviews whether the AI followed its rules is not the same as a process that considers whether the outcome was correct for this individual. The reconsideration should be able to override the automated decision on the merits, not just confirm that the system worked as designed.
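The distinction between checking the rules and reconsidering the merits can be made concrete. In this hypothetical sketch, the conformance check can only confirm the system behaved as designed, while the merits review re-decides with the customer's new information and authority to override (all names and thresholds are illustrative):

```python
# Hypothetical contrast between a rule-conformance check and a merits review;
# only the latter satisfies the right to contest.

def rule_conformance_check(record: dict) -> bool:
    # Confirms the system applied its own policy; cannot change the outcome.
    return record["policy_applied_correctly"]

def merits_review(record: dict, new_information: dict) -> str:
    # Re-decides for this individual, with authority to override the
    # automated outcome in light of additional information.
    combined = {**record["inputs"], **new_information}
    if combined.get("income_documented", 0) >= record["income_threshold"]:
        return "approved on review"
    return record["outcome"]

record = {
    "outcome": "declined",
    "policy_applied_correctly": True,
    "inputs": {"income_documented": 20000},
    "income_threshold": 25000,
}
print(rule_conformance_check(record))                       # True: rules were followed
print(merits_review(record, {"income_documented": 30000}))  # approved on review
```

The example shows why a complaints process built only on the first function fails the Article 22 test: it returns True even where the outcome was wrong for this individual.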
Start with your customer-facing automated decisions: credit, insurance, account management, and fraud. For each, document the decision process, the degree of human involvement, the transparency provided, and the reconsideration mechanism. Remediate gaps against Article 22 requirements and the ICO's guidance. This is a compliance baseline that should be completed before expanding AI deployment into new customer-facing areas.
Exploring AI for your organisation? Book fifteen minutes on the calendar.
Let’s build AI together