Noise, Daniel Kahneman; Olivier Sibony; Cass R. Sunstein – 5

The potentially high costs of noise reduction often come up in the context of algorithms, where there are growing objections to “algorithmic bias.” As we have seen, algorithms eliminate noise and often seem appealing for that reason. Indeed, much of this book might be taken as an argument for greater reliance on algorithms, simply because they are noiseless. But as we have also seen, noise reduction can come at an intolerable cost if greater reliance on algorithms increases discrimination on the basis of race and gender, or against members of disadvantaged groups. There are widespread fears that algorithms will in fact have that discriminatory consequence, which is undoubtedly a serious risk. In Weapons of Math Destruction, mathematician Cathy O’Neil urges that reliance on big data and decision by algorithm can embed prejudice, increase inequality, and threaten democracy itself. According to another skeptical account, “potentially biased mathematical models are remaking our lives—and neither the companies responsible for developing them nor the government is interested in addressing the problem.” According to ProPublica, an independent investigative journalism organization, COMPAS, an algorithm widely used in recidivism risk assessments, is strongly biased against members of racial minorities.

No one should doubt that it is possible—even easy—to create an algorithm that is noise-free but also racist, sexist, or otherwise biased. An algorithm that explicitly uses the color of a defendant’s skin to determine whether that person should be granted bail would discriminate (and its use would be unlawful in many nations). An algorithm that takes account of whether job applicants might become pregnant would discriminate against women. In these and other cases, algorithms could eliminate unwanted variability in judgment but also embed unacceptable bias. In principle, we should be able to design an algorithm that disregards race and gender entirely. The more challenging problem, now receiving a great deal of attention, is that an algorithm could discriminate and, in that sense, turn out to be biased, even when it does not overtly use race and gender as predictors.
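To make that last point concrete, here is a minimal, hypothetical sketch (every name, feature, and number below is invented for illustration, not taken from the book or from COMPAS). A perfectly noise-free scoring rule that never sees the protected attribute can still produce very different outcomes across groups, because a permitted feature, here a synthetic "neighborhood" code, is correlated with group membership:

```python
# Hypothetical proxy-discrimination sketch: the scoring rule below never
# receives the protected attribute, yet its decisions differ sharply across
# groups because an allowed feature (neighborhood) carries group information.
import random

random.seed(42)

def make_applicant():
    group = random.choice(["A", "B"])  # protected attribute, never shown to the model
    # Proxy: group A mostly lives in neighborhood 1, group B in neighborhood 0.
    neighborhood = 1 if random.random() < (0.8 if group == "A" else 0.2) else 0
    # Historical income also tracks neighborhood in this invented population.
    income = random.gauss(60 if neighborhood == 1 else 45, 10)
    return {"group": group, "neighborhood": neighborhood, "income": income}

def score(applicant):
    # Noise-free rule: identical inputs always yield identical outputs.
    # It uses only "permitted" features, but neighborhood acts as a proxy.
    return applicant["income"] + 15 * applicant["neighborhood"]

applicants = [make_applicant() for _ in range(10_000)]
THRESHOLD = 65  # arbitrary cutoff for "approve"

for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    approved = sum(score(a) >= THRESHOLD for a in members)
    print(f"group {g}: approval rate = {approved / len(members):.1%}")
```

Deleting the group column changes nothing here: the disparity rides in on the proxy, which is exactly the "does not overtly use race and gender as predictors" problem the excerpt describes.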

Can algorithms provide a fairer way to make judgements and decisions? Humans are prone to bias, and we tend to be slaves to our emotions in the moment. But the answer depends on how you define fair.

If you aim for logical consistency all the time, using algorithms to make decisions can perpetuate widespread discrimination and take the individual element out of things. We already tend to stereotype and make assumptions based on certain visual characteristics; algorithms set those assumptions in stone.

If you instead want each situation to receive its own judgement, where context is taken into account, inefficiencies and discrepancies are inevitable. What then matters is how we measure those discrepancies and the level of discrepancy we are willing to accept, as in the sketch below.
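As a toy illustration of measuring those discrepancies, here is a hypothetical "noise audit" in the spirit of the book (all judges, cases, and ratings are invented): several judges rate the same cases, and disagreement is summarised as the spread of their ratings.

```python
# Hypothetical noise-audit sketch: five judges rate the same three cases;
# noise per case is measured as the standard deviation of their ratings.
from statistics import mean, stdev

ratings = {  # case -> ratings given by five different judges of the same facts
    "case 1": [4, 7, 5, 8, 6],
    "case 2": [2, 2, 3, 2, 3],
    "case 3": [9, 4, 7, 3, 6],
}

for case, scores in ratings.items():
    print(f"{case}: mean = {mean(scores):.1f}, noise (stdev) = {stdev(scores):.2f}")
```

Framed this way, the acceptable level of discrepancy becomes an explicit threshold on that spread rather than something left to intuition.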

While it might seem that the book is saying humans are guilty of noise, resorting purely to algorithms and rules would cause many people to be discriminated against even more heavily, with no consideration for context. Now that sounds like dystopian fiction come true.
