A solicitor asks for your prospects assessment. You say 60/40 in the claimant's favour. She asks how you arrived at that figure. You cite the documentary evidence, the weakness of the respondent's main witness on the key meeting, and the tribunal's likely approach to credibility. What you do not mention — because you have not checked, or because you regard it as irrelevant — is what the published success rate for that claim type was at final hearing last year.
That missing number is a base rate. And the decision to ignore it is not a neutral analytical choice. It is a well-documented cognitive error with a substantial literature behind it, a literature that employment practitioners have, with some determination, managed to avoid.
The Outside View
Michael Mauboussin of Morgan Stanley Investment Management published a paper — Bayes and Base Rates — making an argument that investment analysts have resisted for decades: that starting with the base rate and adjusting for case-specific evidence produces better forecasts than starting with case-specific evidence alone. His framework is Bayesian in structure. Begin with a prior probability derived from the reference class — what happened, historically, in similar situations. Then update as new evidence arrives. The base rate is not the answer. It is the starting point.
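The structure can be sketched in a few lines. The numbers are illustrative, not taken from Mauboussin's paper: a reference-class base rate of 20%, and case-specific evidence three times more likely to be observed if the favourable outcome holds than if it does not.

```python
def bayes_update(prior, likelihood_ratio):
    """Update a prior probability using a likelihood ratio:
    P(evidence | outcome) / P(evidence | no outcome)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

base_rate = 0.20     # reference-class frequency (illustrative)
evidence_lr = 3.0    # strength of the case-specific evidence (illustrative)

posterior = bayes_update(base_rate, evidence_lr)
print(f"{posterior:.2f}")  # 0.43 - strong evidence moves 20% to ~43%, not to near-certainty
```

The point the sketch makes is Mauboussin's: the case-specific evidence adjusts the base rate; it does not replace it.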
Mauboussin's immediate concern is AI revenue projections. Companies have announced five-year targets that, measured against the 75-year base rate for US public companies, have a low probability of being met. Fewer than 10% of infrastructure projects are completed on time and on budget, yet every project team believes theirs is different. Every analyst focuses on the specific company's technology, leadership, and market position (the "inside view") and neglects the statistical distribution of outcomes across the reference class.
The error is not that the inside view is worthless. It is that the inside view, unanchored by the outside view, drifts. Kahneman and Tversky identified this pattern fifty years ago: when people are given both base rate information and case-specific information, they overweight the specific and underweight the statistical. The more vivid and detailed the case-specific narrative, the more completely the base rate is ignored.
The Same Error in a Different Market
Robert Armstrong and Ethan Wu made a related observation in the Financial Times' Unhedged newsletter on the day Russia invaded Ukraine. Markets dropped sharply. The case-specific analysis was alarming: European energy dependency, nuclear risk, sanctions cascades, supply chain rupture. Every factor pointed to lasting disruption. But the base rate for market reactions to geopolitical shocks — including wars — suggested recovery. The case-specific evidence screamed exception. The base rate, as it has done repeatedly, proved the better guide.
Armstrong's point was not that wars do not matter. It was that even sophisticated analysts, with decades of experience and access to every available data source, systematically overweight the vivid, case-specific narrative and underweight the quiet, statistical one. The more dramatic the events, the more completely the base rate is forgotten.
Employment Law Has Base Rates Too
HMCTS publishes tribunal statistics. We know, approximately, how often unfair dismissal claims succeed at final hearing. We know the rates for discrimination claims, whistleblowing claims, and wage deduction claims. These figures vary by claim type, by year, and by jurisdiction. They are imperfect, aggregated, and subject to selection effects — the cases that reach final hearing are not a random sample of all disputes. But they exist. They are the reference class.
Almost no practitioner uses them. Ask a barrister whether the published success rate for the claim type informed their prospects assessment, and the most likely response is mild irritation. "Every case turns on its own facts." "I'm assessing the evidence, not a statistical distribution." "The base rate tells you nothing about this claimant." These responses are not irrational. They reflect a coherent professional philosophy — grounded in the common law tradition of fact-specific adjudication — that treats each case as genuinely unique. The question is whether that philosophy produces well-calibrated probability assessments. The evidence, from every field where it has been tested, suggests that it does not.
The Known Discomfort
The legal profession's resistance to Bayesian reasoning is not merely a practitioner habit. It has been formalised at the highest level. In R v T [2010] EWCA Crim 2439, the Court of Appeal effectively barred the use of likelihood ratios based on Bayesian analysis for non-DNA forensic evidence. The court's concern was that subjective probability inputs produced an "air of spurious precision": in its view, the underlying data was insufficiently reliable to generate meaningful probabilities.
Fenton, Neil and Lagnado, in their 2013 paper in Cognitive Science, documented the broader pattern. The legal profession's discomfort with formal probabilistic reasoning is deep and principled. It extends beyond courtroom evidence to case assessment, settlement valuation, and strategic decision-making. Their analysis demonstrates that Bayesian networks can model complex evidential structures: dependent evidence, "explaining away" patterns, and the kind of multi-layered reasoning that employment tribunals perform daily. The Sally Clark prosecution, in which two cot deaths in one family were presented as a 1 in 73 million coincidence, a figure obtained by treating the deaths as independent events when they were not, illustrated what happens when probabilistic reasoning is done badly. But the legal profession drew the opposite lesson: not that probabilistic reasoning needs to be done properly, but that it should not be done at all.
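The Sally Clark arithmetic is easy to reproduce. The 1-in-8,543 figure for a single cot death was the prosecution expert's estimate; squaring it assumes the two deaths were independent. The conditional probability used below to show the effect of dependence is purely illustrative.

```python
p_single = 1 / 8543                    # prosecution's estimate for one cot death

# The prosecution's calculation: square the single-death probability,
# which is valid only if the two deaths are independent events.
p_both_independent = p_single ** 2
print(f"1 in {1 / p_both_independent:,.0f}")   # 1 in 72,982,849 - the "73 million"

# If a shared genetic or environmental factor makes a second death more likely
# given a first, the joint probability changes by orders of magnitude.
# The 1-in-100 conditional figure is illustrative, not an estimate.
p_second_given_first = 1 / 100
p_both_dependent = p_single * p_second_given_first
print(f"1 in {1 / p_both_dependent:,.0f}")     # 1 in 854,300
```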
The Barrister's Objection
The strongest form of the argument against base rates in case assessment runs roughly as follows. A prospects assessment is holistic. It cannot be decomposed into independent variables without destroying the very thing that makes it valuable — the practitioner's integrated judgment of how the evidence, the law, the tribunal, and the witnesses interact. Base rates, on this view, are not merely irrelevant but actively misleading: they invite practitioners to anchor on a statistical average that has nothing to do with the specific configuration of facts before them.
I have had this conversation, in various forms, with a number of fellow practitioners at the employment Bar. The resistance is genuine and forceful. The objection is not that statistics are generally worthless, but that they are worthless here — in the assessment of a specific case by a practitioner with specific expertise. The base rate for unfair dismissal claims tells you nothing, on this view, because the barrister's judgment already incorporates everything that the base rate could add and more. To start with the base rate and adjust is to subordinate expert judgment to a crude statistical prior.
This is a serious objection. It rests, however, on an empirical claim — that expert judgment, unaided by base rate calibration, is well-calibrated — that has been tested extensively in other domains and found wanting. Philip Tetlock's work on expert political judgment, Kahneman's work on clinical versus actuarial prediction, and Mauboussin's evidence on investment forecasting all point in the same direction: experts who ignore base rates produce less accurate probability assessments than experts who start with the base rate and adjust. The adjustment is where the expertise lives. The base rate is where the calibration lives. You need both.
A Modest Experiment
The case success predictor on this site attempts something limited: it combines a practitioner's assessment of legal merits, evidence quality, and jurisdictional strength with an empirical baseline derived from HMCTS tribunal statistics. The output is not a replacement for judgment. It is a structured way of testing whether your judgment is consistent with what the reference class suggests. If your assessment diverges sharply from the base rate, you may be right — your case may genuinely be exceptional. But the divergence itself is information, and it should prompt a question: what specifically makes this case different?
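To be clear, the tool's actual method is not reproduced here. But one standard way to implement such anchoring, sketched below with invented numbers and an invented weighting, is to blend the practitioner's probability with the base rate in log-odds space, so the inside view moves the estimate without fully escaping the reference class.

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

def anchored_estimate(practitioner_p, base_rate, weight=0.6):
    """Blend an inside-view probability with the reference-class base rate.
    `weight` is the share given to the practitioner's judgment (illustrative)."""
    blended = weight * logit(practitioner_p) + (1 - weight) * logit(base_rate)
    return inv_logit(blended)

# The practitioner says 60/40; suppose the published base rate is 35%.
print(f"{anchored_estimate(0.60, 0.35):.2f}")  # ~0.50 - the divergence is itself information
```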
The evidence updater takes the logic one step further. Given an initial probability assessment, it recalculates as new evidence arrives — in the way that Mauboussin's framework suggests. Begin with a prior, update on evidence, track the shift. Neither tool will tell a practitioner anything they could not work out for themselves. Both will tell them something about the reliability of the working-out.
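The sequential logic can be sketched directly. Each new piece of evidence carries a likelihood ratio, and the running probability shifts as each arrives. Every number below is invented for illustration.

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = 0.35  # starting prior: the base rate for the claim type (illustrative)
evidence = [
    ("helpful document disclosed on day three", 2.5),   # favours the claimant
    ("respondent's main witness weak in cross", 1.8),   # favours the claimant
    ("comparator evidence falls away", 0.5),            # cuts the other way
]
for label, lr in evidence:
    p = update(p, lr)
    print(f"{p:.2f}  after: {label}")
# the probability climbs to ~0.71, then falls back to ~0.55
```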
The Irreducible Question
The base rate will not tell you whether your client will win. It will not tell you whether the judge will prefer the claimant's account or the respondent's. It will not tell you whether the key document, disclosed on day three, transforms the case. What it will tell you is something about the reliability of your belief that these things will happen as you expect — because in every domain where that reliability has been measured, unaided expert judgment has been found to be overconfident, inconsistent, and improved by anchoring to the reference class.
The discomfort at the Bar is real. The objections are principled. But the question is not whether base rates feel relevant to case assessment. It is whether ignoring them produces better assessments than using them. That is an empirical question, and the empirical answer — from investment analysis to medical diagnosis to geopolitical forecasting — has been consistent for fifty years.
Every case is different. That is precisely the argument that every fund manager, every intelligence analyst, and every medical diagnostician has made. In each field, the base rate has improved their predictions anyway. Employment law is not special. The data is there. The question is whether anyone wants to look at it.
References
- Mauboussin, M. J., 'Bayes and Base Rates', Consilient Observer, Morgan Stanley Investment Management
- Fenton, N., Neil, M. & Lagnado, D. A. (2013), 'A General Structure for Legal Arguments About Evidence Using Bayesian Networks', Cognitive Science, 37, 61–102
- Armstrong, R. & Wu, E., 'The war and markets', Unhedged, Financial Times
- R v T [2010] EWCA Crim 2439