Farewell to the cakeism of Johnson and Truss, welcome back reality


You can’t always get what you want, a young man once sang. It’s a simple aphorism, but one worth remembering. Boris Johnson was widely — and rightly — mocked in 2016 for announcing that “our policy is having our cake and eating it”. That was a dishonest refusal to admit that the Brexit referendum had obliged the UK government to make some painful decisions.

But it is not always so easy to see when Mick Jagger’s maxim is in play. Consider the question of whether algorithms make fair decisions. In 2016, a team of reporters at ProPublica, led by Julia Angwin, published an article titled “Machine bias”. It was the result of more than a year’s investigation into an algorithm called Compas, which was being widely used in the US justice system to make recommendations concerning parole, pre-trial detention and sentencing.

Angwin’s team concluded that Compas was much more likely to rate white defendants as lower risk than black defendants. What’s more, “black defendants were twice as likely to be rated as higher risk but not reoffend. And white defendants were twice as likely to be charged with new crimes after being classed as low risk.”

That seems bad. Northpointe, the makers of Compas, pointed out that black and white defendants given a risk rating of, say, 3 had an equal chance of being rearrested. The same was true for black and white defendants with a risk rating of 7, or any other rating. The risk scores meant the same thing, irrespective of race.

Shortly after ProPublica and Northpointe produced their findings, rebuttals and counter-rebuttals, several teams of academics published papers making a simple but surprising point: there are several different definitions of what it means to be “fair” or “unbiased”, and it is arithmetically impossible to be fair in all these ways at once. An algorithm could satisfy ProPublica’s definition of fairness or it could satisfy Northpointe’s, but not both.

Here’s Corbett-Davies, Pierson, Feller and Goel: “It’s actually impossible for a risk score to satisfy both fairness criteria at the same time.” Or Kleinberg, Mullainathan and Raghavan: “We formalise three fairness conditions . . . and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously.”
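To see why, try some toy arithmetic. The numbers below are invented for illustration, not drawn from Compas or the ProPublica data. Suppose, as Northpointe’s definition of fairness demands, a “high risk” label means a 60 per cent chance of rearrest and a “low risk” label a 20 per cent chance, for every defendant regardless of race. Then simple algebra pins down how many people in each group must be flagged high risk, and a group that reoffends more often is forced to suffer more false positives. A minimal sketch, under those assumed numbers:

```python
# A toy illustration (invented numbers, not Compas data): if a binary risk
# score is "calibrated" -- the labels mean the same reoffending probabilities
# for both groups -- then groups with different base rates of reoffending
# must end up with different false-positive rates.

P_HIGH = 0.6  # assumed chance of rearrest given a "high risk" label (both groups)
P_LOW = 0.2   # assumed chance of rearrest given a "low risk" label (both groups)

def false_positive_rate(base_rate: float) -> float:
    """False-positive rate forced on a group with the given reoffending
    base rate by any score calibrated to P_HIGH and P_LOW."""
    # Calibration fixes the share of the group flagged high risk:
    # base_rate = share_high * P_HIGH + (1 - share_high) * P_LOW
    share_high = (base_rate - P_LOW) / (P_HIGH - P_LOW)
    # Of those flagged, a fraction (1 - P_HIGH) never reoffend: false positives.
    false_positives = share_high * (1 - P_HIGH)
    # Divide by the share of the group who never reoffend.
    return false_positives / (1 - base_rate)

print(f"Group reoffending at 50%: FPR = {false_positive_rate(0.5):.0%}")  # 60%
print(f"Group reoffending at 30%: FPR = {false_positive_rate(0.3):.0%}")  # 14%
```

With these made-up numbers the score is perfectly fair in Northpointe’s sense, yet the higher-base-rate group ends up with a 60 per cent false-positive rate against 14 per cent for the other: precisely the pattern ProPublica flagged. No cleverness in the scoring can escape the algebra while the base rates differ.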

This is not just a fact about algorithms. Whether decisions about parole are made by human judges, robots or dart-throwing chimps, the same relentless arithmetic would apply.

We need more scrutiny and less credulity about the life-changing magic of algorithmic decision making. For shining a spotlight on the automation of the gravest judgments, ProPublica’s analysis was invaluable. But if we are to improve algorithmic decision making, we need to remember Jagger’s aphorism. These decisions cannot be “fair” on every possible metric. When it is impossible to have it all, we will have to choose what really matters.

Painful choices are, of course, the bread and butter of economics. There is a particular type that seems to fascinate economists: the “impossible trinity”. The most famous of all impossible trinities will be well known to…


