26 April 2026  ·  8 min read

The Justification Gap

UK indirect discrimination law demands that employers justify disparate impact — but when the decision was made by an opaque algorithm, the justification defence may be structurally unavailable.
Discrimination · Equality · AI · Indirect Discrimination

Martin Sandbu's recent Financial Times column posed a deceptively simple question: can AI discriminate if it cannot justify itself?[1] His argument—that algorithmic decision-making must be swept inside the democratic demand for justification—drew on two of the twentieth century's most influential political philosophers. For Habermas, the legitimacy of any norm depends on whether it could be accepted by all affected persons through rational discourse; a decision imposed without reasons is, on this account, a failure of democratic legitimacy before it is anything else.[2] For Rawls, the exercise of political power is legitimate only when conducted in accordance with principles that all citizens could reasonably endorse—his "liberal principle of legitimacy" makes reason-giving a precondition of just governance, not a courtesy.[3]

These are not abstract commitments. They describe, in philosophical language, exactly what the law of indirect discrimination already requires: that when a criterion treats groups differently, the person applying it must give reasons sufficient to justify the difference. For employment practitioners in this jurisdiction, the question Sandbu raises is not philosophical. It is statutory. The Equality Act 2010 already demands exactly that justification, and has done for years.

The occasion for Sandbu's piece was xAI's lawsuit against Colorado over its new algorithmic discrimination statute, SB 205, which requires deployers of AI systems to use reasonable care to avoid algorithmic discrimination, including impact assessments and transparency obligations. Whether or not xAI's First Amendment challenge succeeds, the statute addresses a real problem: automated systems making consequential decisions about people without anyone being able to say why. But the interesting question for UK practitioners is not what Colorado has done. It is what the Equality Act already requires—and whether that framework can bear the weight of a technology it never anticipated.

The Architecture of Justification

Section 19 of the Equality Act 2010 prohibits indirect discrimination: the application of a provision, criterion or practice ("PCP") that puts persons sharing a protected characteristic at a particular disadvantage, unless the respondent can show that the PCP is a proportionate means of achieving a legitimate aim. The justification defence is the load-bearing wall of indirect discrimination law. Without it, virtually any prima facie neutral criterion—from length-of-service requirements to physical fitness thresholds—would be potentially unlawful.

The test is well established. In Bilka-Kaufhaus GmbH v Weber von Hartz, the ECJ required that the measure "correspond to a real need on the part of the undertaking" and be "appropriate with a view to achieving the objective in question and necessary to that end." The domestic formulation, refined in Hardy & Hansons plc v Lax, is more exacting than a "range of reasonable responses" review: the tribunal does not ask whether a reasonable employer could have considered the means proportionate, but makes its own judgment, on a fair and detailed analysis of the working practices and business considerations involved, as to whether the measure is reasonably necessary.

In Homer v Chief Constable of West Yorkshire, the Supreme Court confirmed that proportionality requires a genuine balancing exercise between the discriminatory effect of the PCP and the reasonable needs of the party applying it. And in Seldon v Clarkson Wright and Jakes, the court held that the legitimate aim must be identified with some specificity—social policy objectives of a general kind may suffice in certain age discrimination contexts, but the means must still be rationally connected to the aim and no more discriminatory than necessary.

Every step of this framework assumes something about the decision-maker: that it chose the criterion, that it can explain why, and that a tribunal can assess whether the explanation holds up.

The Essop Paradox

Essop v Home Office established that a claimant need not demonstrate why a PCP puts their group at a particular disadvantage. The statistical fact of group disadvantage is sufficient. Lady Hale was explicit: "it is not necessary to establish the reason why a particular PCP puts one group at a disadvantage when compared with others."

For claimants challenging algorithmic decisions, this is a gift. If an AI-driven recruitment tool screens out a disproportionate number of candidates sharing a protected characteristic, the claimant need not reverse-engineer the model to establish prima facie indirect discrimination. The statistical output is enough.
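
The arithmetic this requires is not sophisticated. The sketch below (Python, with entirely hypothetical data and column names rather than any particular vendor's export format) shows the kind of group-level selection-rate comparison that, after Essop, can found a prima facie case without any inspection of the model's internals.

```python
# Minimal sketch of Essop-style statistical evidence: selection rates
# by group, computed from the tool's outputs alone. Data and column
# names are hypothetical.
import pandas as pd

# One row per candidate; 'shortlisted' is the screening tool's decision.
outcomes = pd.DataFrame({
    "protected_group": ["A"] * 40 + ["B"] * 60,
    "shortlisted":     [1] * 10 + [0] * 30 + [1] * 30 + [0] * 30,
})

# Selection rate within each group, and the ratio between them.
rates = outcomes.groupby("protected_group")["shortlisted"].mean()
ratio = rates.min() / rates.max()

print(rates)                                 # A: 0.25, B: 0.50
print(f"selection-rate ratio: {ratio:.2f}")  # 0.50
```

A disparity of that kind is enough to move the evidential burden; nothing in the exercise requires access to the model itself.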

But here is the paradox. While the claimant is relieved of the burden of explaining the mechanism of disadvantage, the respondent employer must still discharge the burden of justification. And justification requires the respondent to articulate, with specificity, both the legitimate aim being pursued and why the PCP is a proportionate means of achieving it. The asymmetry that Essop created—generous to claimants on the way in, demanding of respondents on the way out—becomes acute when the decision under scrutiny was made by a system that neither party fully understands.

The Black Box

When a human decision-maker applies a length-of-service criterion, the legitimate aim is readily articulable: rewarding loyalty, retaining experience, reflecting the value of institutional knowledge. The proportionality analysis is tractable because the criterion is transparent and the causal chain is visible.

When an AI system applies a criterion—or, more precisely, when the model's weighting of features produces a pattern that functions as a PCP—the position is fundamentally different. The employer may not know which features the model has weighted. It may not know why those features correlate with the output. The "criterion" may be a complex interaction of variables that no human selected and no human can fully articulate.

How does an employer demonstrate that an opaque algorithmic weighting is a proportionate means of achieving a legitimate aim when it cannot identify the means with precision? Hardy & Hansons requires the tribunal to make its own critical evaluation of whether the measure is reasonably necessary, on a fair and detailed analysis of the practices and business considerations involved. But a tribunal cannot critically evaluate means that neither party can describe, and an employer that cannot describe the means cannot supply the analysis on which that evaluation depends.
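
To make the epistemic problem concrete, the sketch below (Python, using a scikit-learn model as a stand-in for a vendor-supplied screening tool; the data and feature names are hypothetical) shows roughly as far as a routine post-hoc audit gets. Permutation importance ranks the inputs by how much each one drives the output, but it says nothing about why they do, and interactions between features remain invisible.

```python
# Minimal sketch of a post-hoc audit of an opaque screening model.
# The synthetic data, the model and the feature names are all
# hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Stand-in for historic screening data and a vendor-trained model.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature in turn and measure how
# much the model's accuracy degrades. This ranks which inputs drive the
# output; it does not explain why, and feature interactions stay hidden.
audit = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(audit.importances_mean)[::-1]:
    print(f"feature_{i}: importance {audit.importances_mean[i]:.3f}")
```

Even done well, an audit of this kind yields a ranked list of correlates, not an articulation of a criterion and the legitimate aim it serves. It narrows the justification gap; it does not close it.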

The Gap

This is the justification gap. The Equality Act's architecture assumes a model of decision-making in which criteria are chosen by humans, for articulable reasons, and can be examined for proportionality. Algorithmic decision-making breaks each of these assumptions. The criteria are emergent, the reasons are latent, and examination requires a form of technical audit for which neither the tribunal system nor the typical respondent is presently equipped.

The gap is not merely practical. It is doctrinal. If an employer cannot articulate the PCP with sufficient clarity to permit proportionality analysis, the justification defence is structurally unavailable—not because the employer is acting unreasonably, but because the defence presupposes a degree of epistemic access that opaque AI denies.

I should be clear about where this leads, because it is not a comfortable conclusion. An employer that deploys an AI system it cannot explain has, in effect, adopted a PCP it cannot justify. The doctrinal consequence is stark: the justification defence fails, and the discrimination is unlawful. The employer chose to delegate a consequential decision to a system whose reasoning it cannot reconstruct. The Equality Act does not require the employer to succeed in justifying—it requires the employer to be capable of justifying. If the employer has deprived itself of that capability by design, the Act should not bail it out.

The Regulatory Landscape

The EU AI Act, which entered into force in August 2024, classifies AI systems used in employment, workers management and access to self-employment as high-risk (Annex III, point 4). High-risk systems must be designed to allow human oversight and must be sufficiently transparent that deployers can interpret the system's output and intervene where necessary. The UK, post-Brexit, has no equivalent statutory framework. The government's preference for a "pro-innovation" approach has produced guidance rather than legislation.

For UK practitioners, the absence of AI-specific regulation does not mean the absence of legal risk. The Equality Act applies to outcomes, not methods. An employer that achieves a discriminatory outcome through AI is in no better position than one that achieves it through a human prejudice. The difference is that the human prejudice may, paradoxically, be easier to justify—or at least easier to diagnose and correct.

Practical Implications

For respondent practitioners and in-house counsel, the implications are immediate. An employer deploying an AI tool for recruitment, promotion or dismissal must be able to say what criteria the system in fact applies, what legitimate aim they serve, and why they are proportionate to it; if neither the employer nor its vendor can give that account, the section 19 justification burden is unlikely to be discharged. Explainability, pre-deployment impact assessment and ongoing monitoring of outputs are not technical niceties; they are the raw material of the defence.

For claimant practitioners, the Essop framework is already fit for purpose. Statistical evidence of disparate impact shifts the burden. The claimant need not decode the algorithm. The question is whether the respondent can explain it—and increasingly, the answer will be that it cannot.

The framework has always demanded reasons. It is not about to make an exception for machines that cannot give them.


Table of Authorities

Case → Relevance
Bilka-Kaufhaus GmbH v Weber von Hartz [1986] ECR 1607 → Origin of the objective justification test: the measure must correspond to a real need and be appropriate and necessary
Essop v Home Office [2017] IRLR 558 → Claimant need not show why a PCP causes group disadvantage; statistical disparity suffices
Hardy & Hansons plc v Lax [2005] IRLR 726 → Justification calls for the tribunal's own critical evaluation of whether the measure is reasonably necessary, not a "range of reasonable responses" review
Homer v Chief Constable of West Yorkshire [2012] UKSC 15 → Proportionality requires balancing the discriminatory effect against the respondent's reasonable needs
Seldon v Clarkson Wright and Jakes [2012] UKSC 16 → Legitimate aim must be identified with specificity; means must be rationally connected and no more discriminatory than necessary

Notes

  1. Martin Sandbu, 'Can AI discriminate if it can't justify itself?', Financial Times, 26 April 2026.
  2. Jürgen Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy (trans. William Rehg, MIT Press, 1996), especially chs 3–4. Habermas's discourse principle holds that only those norms are valid to which all affected persons could agree as participants in rational discourse.
  3. John Rawls, Political Liberalism (Columbia University Press, 1993), lecture VI. The liberal principle of legitimacy provides that the exercise of political power is fully proper only when exercised in accordance with a constitution the essentials of which all citizens may reasonably be expected to endorse.

Alex MacMillan is an employment law barrister at St Philips Chambers. This article is for informational purposes and does not constitute legal advice.

Visit alexmacmillan.co.uk →