Algorithmic Discrimination
Last reviewed April 2026
An insurer's AI pricing model does not know the customer's ethnicity. It does not need to. It uses postcode, vehicle age, occupation, and credit history: features that correlate with ethnicity strongly enough that the model's outputs produce a 23 per cent price disparity between ethnic groups for policies with identical risk profiles. Algorithmic discrimination is the systematic production of unfair outcomes by AI systems, and it does not require discriminatory intent. It requires only discriminatory data.
What is algorithmic discrimination?
Algorithmic discrimination occurs when an AI system produces outcomes that systematically disadvantage individuals based on protected characteristics, whether or not those characteristics are used as explicit inputs. It is the computational manifestation of bias in AI systems, made visible in the decisions that affect people's lives: who gets credit, at what price, who gets insured, whose claim is flagged for investigation, and whose application is fast-tracked or delayed.
The mechanism is proxy discrimination. Machine learning models identify patterns that predict outcomes. If historical outcomes were shaped by discrimination, the patterns the model learns reflect that discrimination. A lending model trained on historical approval data learns that applicants from certain areas are declined more often, not because they are riskier but because they were historically subject to discriminatory lending practices. The model perpetuates the past and labels it prediction.
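The mechanism can be demonstrated in a few lines. The sketch below is a toy simulation with entirely synthetic data and assumed effect sizes: ethnicity is never shown to the model, but a correlated postcode feature lets it reconstruct the bias baked into the historical approval labels.

```python
# Toy simulation of proxy discrimination; all data synthetic, effect sizes assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
ethnicity = rng.integers(0, 2, n)                     # hidden from the model
postcode = (ethnicity + rng.normal(0, 0.5, n) > 0.5)  # proxy: correlates with ethnicity
risk = rng.normal(0, 1, n)                            # genuine risk signal

# Historical approvals were suppressed for group 1 regardless of risk.
approved = ((risk < 0.5) & ~((ethnicity == 1) & (rng.random(n) < 0.4))).astype(int)

# The model is trained on postcode and risk only; ethnicity is never an input.
model = LogisticRegression().fit(np.column_stack([postcode, risk]), approved)
scores = model.predict_proba(np.column_stack([postcode, risk]))[:, 1]

print("mean approval score, group 0:", round(scores[ethnicity == 0].mean(), 3))
print("mean approval score, group 1:", round(scores[ethnicity == 1].mean(), 3))
# The gap persists: the model has learned the historical bias through the proxy.
```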
The legal framework is clear. The Equality Act 2010 prohibits indirect discrimination: a provision, criterion, or practice that puts persons sharing a protected characteristic at a particular disadvantage, unless it can be objectively justified as a proportionate means of achieving a legitimate aim. An AI model is a provision. Its features and thresholds are criteria. And its systematically disparate outputs are the disadvantage. The Act applies to algorithms as fully as it applies to human decisions.
The landscape
The FCA has made clear that algorithmic discrimination falls within its supervisory scope. The Consumer Duty's requirement for good customer outcomes applies across the customer base, and outcomes that systematically disadvantage protected groups are not good outcomes. The FCA's 2024 feedback statement on AI specifically identified discrimination risk as a priority area for supervisory attention.
The EU AI Act addresses discrimination through its requirements for training data quality (Article 10) and risk management (Article 9). High-risk AI systems must be tested for bias, and the results must be documented. Residual biases that cannot be eliminated must be mitigated through technical or organisational measures. The Act creates a structured obligation to identify, measure, and manage discrimination risk throughout the AI lifecycle.
The ICO and the Equality and Human Rights Commission's joint guidance on AI and equality provides practical advice on testing for and mitigating algorithmic discrimination. The guidance clarifies that organisations cannot rely on the absence of protected characteristics in model inputs as a defence against discrimination claims. Proxy discrimination is indirect discrimination, and the organisation bears the burden of demonstrating objective justification.
How AI changes this
Fairness testing tools provide the technical means to detect algorithmic discrimination before and after deployment. These tools compute disparate impact ratios, equalised odds, and other fairness metrics across protected groups, identifying where model outputs systematically diverge. The US Equal Employment Opportunity Commission's "four-fifths rule" (a protected group's favourable-outcome rate must be at least 80 per cent of the most favoured group's rate) provides a widely used quantitative threshold, though the legal standard under UK equality law is not identical.
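As a minimal sketch of what such a tool computes, assuming decisions are logged to a pandas DataFrame with a group column and a binary favourable-outcome column (both hypothetical names):

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Each group's favourable-outcome rate divided by the most favoured
    group's rate. Assumes `outcome_col` is 1 for a favourable decision
    (approval, say) and 0 otherwise."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical decision log: one row per application.
decisions = pd.DataFrame({
    "ethnic_group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":     [1,   1,   1,   1,   0,   0,   1,   0],
})

ratios = disparate_impact(decisions, "ethnic_group", "approved")
flagged = ratios[ratios < 0.8]                 # four-fifths heuristic
print(ratios.round(2))
print("Below four-fifths threshold:", list(flagged.index))
```

In practice you would compute equalised odds and other metrics alongside the ratio; maintained libraries such as fairlearn implement them.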
Debiasing techniques address discrimination at different points in the pipeline. Pre-processing methods modify training data to reduce historical bias. In-processing methods add fairness constraints to the model's learning objective. Post-processing methods adjust model outputs to equalise outcomes across groups. Each approach has trade-offs: accuracy, transparency, and the degree of intervention. The choice depends on the specific context and the nature of the discrimination detected.
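Of the three, post-processing is the easiest to sketch. The fragment below picks per-group score cut-offs to equalise approval rates (demographic parity); the inputs and target rate are assumptions, and a production implementation would use a maintained library such as fairlearn's ThresholdOptimizer.

```python
import numpy as np

def parity_thresholds(scores: np.ndarray, groups: np.ndarray, approval_rate: float) -> dict:
    """Choose a per-group score cut-off so every group is approved at
    roughly the same rate (demographic parity)."""
    cuts = {}
    for g in np.unique(groups):
        # The (1 - approval_rate) quantile approves about that share of the group.
        cuts[g] = np.quantile(scores[groups == g], 1 - approval_rate)
    return cuts

# Hypothetical model scores and group labels.
scores = np.array([0.91, 0.72, 0.55, 0.80, 0.45, 0.33, 0.64, 0.51])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
cuts = parity_thresholds(scores, groups, approval_rate=0.5)
approved = np.array([s >= cuts[g] for s, g in zip(scores, groups)])
print(cuts, approved)
```

Note that thresholds keyed on a protected characteristic can themselves amount to direct discrimination under the Equality Act 2010, so this technique in particular needs legal review before deployment.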
Counterfactual analysis tests whether a model's decision would change if the protected characteristic were different. A model that never sees ethnicity cannot respond to flipping that field alone, so the counterfactual must propagate the change through the features ethnicity causally influences, postcode among them. If the propagated change, and nothing else, alters the model's output, those features are acting as proxies for the characteristic. This causal approach to discrimination testing is more rigorous than statistical disparity analysis and aligns more closely with how courts assess discrimination claims.
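A sketch of the idea, where `predict`, `intervention`, and `propagate` are all assumed interfaces rather than any particular library's API:

```python
def counterfactual_outputs(predict, applicant: dict, intervention: dict, propagate):
    """Compare the model's output on an applicant with its output on the
    causal counterfactual. `predict` scores a feature dict; `intervention`
    sets the protected attribute's counterfactual value; `propagate`
    encodes the assumed causal links from that attribute to downstream
    features. Building `propagate` needs domain knowledge and is the
    hard part of counterfactual analysis."""
    counterfactual = propagate({**applicant, **intervention})
    return predict(applicant), predict(counterfactual)

# Toy example: ethnicity influences postcode band; the model sees only postcode.
def propagate(features: dict) -> dict:
    features = dict(features)
    features["postcode_band"] = 3 if features["ethnicity"] == "B" else 1  # assumed link
    return features

predict = lambda f: 0.9 - 0.1 * f["postcode_band"]   # stand-in pricing model
factual, counterfactual = counterfactual_outputs(
    predict, {"ethnicity": "A", "postcode_band": 1}, {"ethnicity": "B"}, propagate
)
print(factual, counterfactual)   # a gap indicates proxy use
```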
Continuous monitoring detects discrimination that emerges after deployment. A model that was fair at launch can become discriminatory as the population changes, as the data distribution shifts, or as the model interacts with other systems. Production monitoring against fairness thresholds, with automated alerts and defined escalation, catches emerging discrimination before it affects a large number of customers.
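A minimal monitoring sketch, assuming decisions are batched into windows (daily, say) and that `FAIRNESS_FLOOR` and the alert route come from your fairness policy:

```python
import pandas as pd

FAIRNESS_FLOOR = 0.8   # assumed threshold, aligned with the four-fifths heuristic

def check_window(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Disparate impact ratios over one monitoring window; returns the
    groups that breach the floor."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    return ratios[ratios < FAIRNESS_FLOOR]

def monitor(decisions: pd.DataFrame) -> None:
    breaches = check_window(decisions, "ethnic_group", "favourable")
    for group, ratio in breaches.items():
        # Wire this to your paging or ticketing system and to the
        # escalation route defined in your fairness policy.
        print(f"ALERT: disparate impact ratio {ratio:.2f} for group {group}")

# Hypothetical daily batch of production decisions.
window = pd.DataFrame({
    "ethnic_group": ["A", "A", "A", "B", "B", "B"],
    "favourable":   [1,   1,   0,   1,   0,   0],
})
monitor(window)
```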
What to know before you start
The legal standard is objective justification, not statistical inevitability. A model that produces disparate outcomes is not automatically unlawful if the disparity can be justified as a proportionate means of achieving a legitimate aim (accurate risk assessment, for example). But the justification must be evidence-based, and the firm must demonstrate that less discriminatory alternatives were considered. Legal counsel should be involved in assessing whether a model's disparate impact is defensible.
Protected characteristic data is needed to test for discrimination. If you do not collect ethnicity, disability, or other protected characteristic data, you cannot test whether your models discriminate along those dimensions. Consider collecting this data specifically for fairness testing purposes, with appropriate data protection safeguards. The alternative, not testing, exposes the firm to undetected discrimination and unmanaged legal risk.
Discrimination risk increases with model complexity. A simple scorecard with a handful of features is easier to audit for proxy discrimination than a neural network with hundreds of features and complex interactions. Consider model interpretability as a discrimination control: a model whose reasoning can be inspected is a model whose discrimination can be identified and addressed.
Start with a discrimination audit of your highest-impact consumer-facing models: credit, pricing, and claims. Use fairness testing tools to compute disparity metrics across available protected characteristics. Where disparities are identified, assess whether they are justified and whether less discriminatory alternatives exist. This audit establishes a baseline and informs the design of your ongoing fairness monitoring programme.
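A hypothetical audit loop across the portfolio might look like the following, with synthetic decision logs standing in for each model's real data:

```python
import pandas as pd

def disparity_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Synthetic decision logs standing in for the three highest-impact models.
portfolio = {
    "credit":  pd.DataFrame({"ethnic_group": list("AABB"), "favourable": [1, 1, 1, 0]}),
    "pricing": pd.DataFrame({"ethnic_group": list("ABAB"), "favourable": [1, 0, 1, 1]}),
    "claims":  pd.DataFrame({"ethnic_group": list("ABBA"), "favourable": [1, 1, 1, 1]}),
}

baseline = []
for model, log in portfolio.items():
    ratios = disparity_ratios(log, "ethnic_group", "favourable")
    baseline.append({
        "model": model,
        "worst_ratio": round(float(ratios.min()), 2),
        "needs_review": bool(ratios.min() < 0.8),   # four-fifths heuristic
    })

print(pd.DataFrame(baseline))   # the baseline that seeds ongoing monitoring
```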