To give you just one sense of the mastery of Mullainathan’s machine, it flagged 1 percent of all the defendants as “high risk.” These are the people the computer thought should never be released prior to trial. According to the machine’s calculations, well over half of the people in that high-risk group would commit another crime if let out on bail. When the human judges looked at that same group of bad apples, though, they didn’t identify them as dangerous at all. They released 48.5 percent of them! “Many of the defendants flagged by the algorithm as high risk are treated by the judge as if they were low risk,” Team Mullainathan concluded in a particularly devastating passage. “Performing this exercise suggests that judges are not simply setting a high threshold for detention but are mis-ranking defendants.…The marginal defendants they select to detain are drawn from throughout the entire predicted risk distribution.”
This is yet another statistic showing how leaving certain decisions to machines can lead to better outcomes, since it is so difficult for us to overcome our human biases when we make them ourselves.
I’m not advocating for AI to replace judges, though; I simply think this is a good illustration of how even the most experienced among us can be prone to making bad decisions. This is especially true when the feedback on our decisions is not immediate or obvious, as in the case of a judge deciding whether to release someone on bail. I doubt there are data analysts standing by to give each judge statistics on their own performance, detailing which factors they have missed. But maybe there should be.
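If that kind of feedback existed, it might look something like the minimal sketch below. Everything in it is a hypothetical illustration, not data or code from Mullainathan’s study: the `Case` records, the judge’s history, and the 0.5 risk threshold are all made up. It simply measures how often a judge released defendants that a model had flagged as high risk.

```python
# A hypothetical sketch of per-judge feedback, not anything from the actual study.
from dataclasses import dataclass

@dataclass
class Case:
    predicted_risk: float  # model's predicted probability of re-offense if released
    released: bool         # did the judge release this defendant on bail?

def high_risk_release_rate(cases, threshold=0.5):
    """Share of model-flagged high-risk defendants that the judge released."""
    flagged = [c for c in cases if c.predicted_risk >= threshold]
    if not flagged:
        return 0.0
    return sum(c.released for c in flagged) / len(flagged)

# Made-up decision history for one judge.
history = [
    Case(predicted_risk=0.82, released=True),
    Case(predicted_risk=0.71, released=False),
    Case(predicted_risk=0.65, released=True),
    Case(predicted_risk=0.30, released=True),
    Case(predicted_risk=0.15, released=False),
]

rate = high_risk_release_rate(history)
print(f"Released {rate:.0%} of defendants the model flagged as high risk")
```

Even a simple report like this, generated regularly, would give judges the kind of delayed feedback they currently never see.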
The bigger lesson is that we should learn to be more open-minded when judging others. Just as we are complex creatures, capable of both good and bad, so are the people we judge. Demonizing the other side, as we see in our political sphere today, is extremely unhealthy in a democracy.