Daily Tao

Radical Uncertainty, Mervyn King;John Kay – 1

The crisis of 2007–08 represented – obviously – a failure of economic analysis and economic policy. But while recognising the seriousness and cost of the financial crisis, economists have generally been reluctant to accept that their intellectual framework is in need of revision. Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not. They had already adopted mathematical techniques which gave the term ‘risk’ a different meaning from that of everyday usage. In this book we will describe the considerable confusion and economic damage which has arisen as a result of the failure to recognise that the terms ‘risk’, ‘uncertainty’ and ‘rationality’ have acquired technical meanings in economics which do not correspond to the everyday use of these words. And over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future. The difference between risk and uncertainty was the subject of lively debate in the inter-war period. Two great economists – Frank Knight in Chicago and John Maynard Keynes in Cambridge, England – argued forcefully for the continued importance of the distinction. Knight observed that ‘a measurable uncertainty, or “risk” proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all’. Keynes made a similar distinction. In an article summarising his magnum opus, The General Theory of Employment, Interest and Money , he wrote: By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. 
Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. The title of this book, and its central concept, is radical uncertainty . Uncertainty is the result of our incomplete knowledge of the world, or about the connection between our present actions and their future outcomes. Depending on the nature of the uncertainty, such incomplete knowledge may be distressing or pleasurable. I am fearful of the sentence the judge will impose, but look forward to new experiences on my forthcoming holiday. We might sometimes wish we had perfect foresight, so that nothing the future might hold could surprise us, but a little reflection will tell us that such a world would be a dull place. We have chosen to replace the distinction between risk and uncertainty deployed by Knight and Keynes with a distinction between resolvable and radical uncertainty. Resolvable uncertainty is uncertainty which can be removed by looking something up (I am uncertain which city is the capital of Pennsylvania) or which can be represented by a known probability distribution of outcomes (the spin of a roulette wheel). With radical uncertainty, however, there is no similar means of resolving the uncertainty – we simply do not know. Radical uncertainty has many dimensions: obscurity; ignorance; vagueness; ambiguity; ill-defined problems; and a lack of information that in some cases but not all we might hope to rectify at a future date. These aspects of uncertainty are the stuff of everyday experience.

Wanted to cover more passages from the previous book. After some conversations, though, I couldn't really find any fresh insights beyond the picture of the framework I had already posted.

One thing that struck me was that most books, for commercial purposes, try to cram complex and intricate narratives into one model or framework of thinking. After all, simplicity sells. Think of books like Grit or Good to Great, or the bestsellers on vulnerability: books with a simple common narrative that maps neatly onto a TED Talk. The success of an idea usually doesn't lie in how accurate it is, but in how easily it sticks. So as we move on to cover new books, do always keep the above in mind and never let one single narrative dictate your thoughts. After all, multi-model thinking usually leads to much more accurate judgements and predictions.

Moving on, this book is about radical uncertainty: situations where historical data offers no guidance in predicting future outcomes. One example in the book was Obama's decision to send in the Navy SEALs to capture Osama bin Laden. He had no historical data to refer to, only estimated probabilities from his analysts, and ultimately had to make a judgement call. This book is about learning how to deal with this.

The Narrow Corridor, Daron Acemoglu;James A. Robinson – 1

This book is about liberty. Liberty depends on the different types of Leviathans and their evolution—whether a society will live without an effective state, put up with a despotic one, or manage to forge a balance of power that opens the way for the emergence of a Shackled Leviathan and the gradual flourishing of liberty. In contrast to Hobbes’s vision of society submitting its will to the Leviathan, which much of social science and the modern world order take for granted, it is fundamental to our theory that Leviathans are not always welcomed with open arms and their path is a rocky one, to say the least. In many instances society will resist their ascendancy and will do so successfully, just like the Tiv did and the Lebanese still do. The result of this resistance is illiberty. When this resistance crumbles, we may end up with a Despotic Leviathan, which looks a lot like the sea monster that Hobbes imagined. But this Leviathan, though it prevents Warre, does not necessarily make its subjects’ lives much richer than the “nasty, brutish, and short” existence that people eke out under the Absent Leviathan. Nor do its subjects really “submit their wills” to the Leviathan—any more than East Europeans chanting the “Internationale” in the streets before the collapse of the Berlin Wall really submitted their wills to Soviet Russia. The implications for citizens are different in some ways, but still there is no liberty. A very different type of Leviathan, a shackled one, emerges when there is a balance between its power and society’s capacity to control it. This is the Leviathan that can resolve conflicts fairly, provide public services and economic opportunities, and prevent dominance, laying down the basic foundations of liberty. This is the Leviathan that people, believing that they can control it, trust and cooperate with and allow to increase its capacity. 
This is the Leviathan that also promotes liberty by breaking down the various cages of norms tightly regulating behavior in society. But in a fundamental sense this is not a Hobbesian Leviathan. Its defining feature is its shackles: it does not have Hobbes’s sea monster’s dominance over society; it does not have the capability to ignore or silence people when they try to influence political decision making. It stands not above but alongside society.

Thought we'd begin with a diagram of the key framework used in this book. These are the same authors behind Why Nations Fail, the perfect book for your home bookshelf display.

Putting snarkiness aside, this book is a sequel to their previous book on extractive vs inclusive institutions, with a different kind of framework. What makes liberal societies succeed? The "Narrow Corridor" in which liberty flourishes lies on a delicate balance between the power of society and that of the state. The key point is that if the state is too powerful, it risks becoming despotic. On the other hand, if citizens have too much power, the state will not be able to act effectively on long-term plans that might be unpopular.

The key framework of the book can be simply explained by the graphic. I’ll be looking through the notes and sharing some of the key historical examples and other explanations by the author(s).


A Thousand Brains, Jeff Hawkins;Richard Dawkins – 5

Genes are just molecules that replicate. As genes evolve, they are not heading in any particular direction, nor is one gene intrinsically better than another, just as one molecule is not intrinsically better than any other molecule. Some genes may be better at replication, yet, as environments change, which genes are better at replicating also changes. Importantly, there is no overall direction to the changes. Life based on genes has no direction or goal. Life may manifest itself as a virus, a single-celled bacterium, or a tree. But there doesn’t appear to be any reason to suggest one life-form is better than another, beyond its ability to replicate. Knowledge is different. Knowledge has both a direction and an end goal. For example, consider gravity. In the not-too-distant past, nobody had any idea why things fell down and not up. Newton created the first successful theory of gravity. He proposed that it is a universal force, and he showed that it behaves according to a set of simple laws that could be expressed mathematically. After Newton, we would never go back to having no theory of gravity. Einstein’s explanation of gravity is better than Newton’s, and we will never go back to Newton’s theory. It wasn’t that Newton was wrong. His equations still accurately describe gravity as we experience it every day. Einstein’s theory incorporates Newton’s but better describes gravity under unusual conditions. There is a direction to knowledge. Knowledge of gravity can go from no knowledge, to Newton’s, to Einstein’s, but it can’t go in the opposite direction. In addition to a direction, knowledge has an end goal. The earliest human explorers did not know how big the Earth was. No matter how far they traveled, there was always more. Was the Earth infinite? Did it end with an edge where further travel would cause you to fall off? Nobody knew. But there was an end goal. It was assumed that there was an answer to the question, How big is the Earth? 
We eventually achieved that goal with a surprising answer. The Earth is a sphere, and now we know how big the Earth is. We are facing similar mysteries today. How big is the universe? Does it go on forever? Does it have an edge? Does it wrap around on itself like the Earth? Are there many universes? There are plenty of other things we don’t understand: What is time? How did life originate? How common is intelligent life? Answering these questions is a goal, and history suggests we can achieve it. A future driven by genes has little to no direction and only short-term goals: stay healthy, have kids, enjoy life. A future designed in the best interest of knowledge has both direction and end goals. The good news is we don’t have to choose one future over the other. It is possible to do both. We can continue to live on Earth, doing our best to keep it livable and trying to protect ourselves from our own worst behaviors. And we can simultaneously dedicate resources to ensuring the preservation of knowledge and the continuation of intelligence for a time in the future when we are no longer here.

Really liked the part where he said that knowledge has a direction and can only move forward. Unlike genes, whose goal is to self-replicate, knowledge has an end goal, and that might be something many of us can get behind.

Final excerpt from this book. I really liked this as an ending to a book about intelligence, and about how we need to see intelligence as something that needs to be preserved.

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 4

The situation we are in today reminds me of the early days of computing. The word “computer” originally referred to people whose job was to perform mathematical calculations. To create numeric tables or to decode encrypted messages, dozens of human computers would do the necessary calculations by hand. The very first electronic computers were designed to replace human computers for a specific task. For example, the best automated solution for message decryption was a machine that only decrypted messages. Computing pioneers such as Alan Turing argued that we should build “universal” computers: electronic machines that could be programmed to do any task. However, at that time, no one knew the best way to build such a computer. There was a transitionary period where computers were built in many different forms. There were computers designed for specific tasks. There were analog computers, and computers that could only be repurposed by changing the wiring. There were computers that worked with decimal instead of binary numbers. Today, almost all computers are the universal form that Turing envisioned. We even refer to them as “universal Turing machines.” With the right software, today’s computers can be applied to almost any task. Market forces decided that universal, general-purpose computers were the way to go. This is despite the fact that, even today, any particular task can be performed faster or with less power using a custom solution, such as a special chip. Product designers and engineers usually prefer the lower cost and convenience of general-purpose computers, even though a dedicated machine could be faster and use less power. A similar transition will occur with artificial intelligence. Today we are building dedicated AI systems that are the best at whatever task they are designed to do. But in the future, most intelligent machines will be universal: more like humans, capable of learning practically anything. 
Today’s computers come in many shapes and sizes, from the microcomputer in a toaster to room-size computers used for weather simulation. Despite their differences in size and speed, all these computers work on the same principles laid out by Turing and others many years ago. They are all instances of universal Turing machines. Similarly, intelligent machines of the future will come in many shapes and sizes, but almost all of them will work on a common set of principles. Most AI will be universal learning machines, similar to the brain. (Mathematicians have proven that there are some problems that cannot be solved, even in principle. Therefore, to be precise, there are no true “universal” solutions. But this is a highly theoretical idea and we don’t need to consider it for the purposes of this book.) Some AI researchers argue that today’s artificial neural networks are already universal. A neural network can be trained to play Go or drive a car. However, the same neural network can’t do both. Neural networks also have to be tweaked and modified in other ways to get them to perform a task. When I use the terms “universal” or “general-purpose,” I imagine something like ourselves: a machine that can learn to do many things without erasing its memory and starting over. There are two reasons AI will transition from the dedicated solutions we see today to more universal solutions that will dominate the future. The first is the same reason that universal computers won out over dedicated computers. Universal computers are ultimately more cost-effective, and this led to more rapid advances in the technology. As more and more people use the same designs, more effort is applied to enhancing the most popular designs and the ecosystems that support them, leading to rapid improvements in cost and performance. This was the underlying driver of the exponential increase in computing power that shaped industry and society in the latter part of the twentieth century. 
The second reason that AI will transition to universal solutions is that some of the most important future applications of machine intelligence will require the flexibility of universal solutions. These applications will need to handle unanticipated problems and devise novel solutions in a way that today’s dedicated deep learning machines cannot.

Hawkins reckons that artificial intelligence is still in the early phase of its development. Just as computers were once specialised machines that could only be used for specific cases, AI systems today are still in a relatively rudimentary phase and can only be deployed in very specific situations. What comes next is getting to a point where machines are able to learn practically anything and handle different tasks without needing to be trained or set up by humans. You wouldn't need to train an AI to play a new game; it would simply be capable of learning it on its own.

It's hard for any of us to visualise what that kind of future would look like, simply because it hasn't happened yet. There are also many differing opinions among the experts. Many have said that Artificial Narrow Intelligence (i.e. the kind that focuses on one specific task, such as an e-commerce recommendation engine) is the kind we can most likely expect to see dominant and widely used in our lifetimes. Artificial General Intelligence (more the Terminator kind) is where many experts differ, and some expect it will never happen in our lifetimes.

Just as computers relied on the exponential increase in chip computing power to truly become ubiquitous and universal, AI will need a similar catalyst.

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 3

To be an expert in any domain requires having a good reference frame, a good map. Two people observing the same physical object will likely end up with similar maps. For example, it is hard to imagine how the brains of two people observing the same chair would arrange its features differently. But when thinking about concepts, two people starting with the same facts might end up with different reference frames. Recall the example of a list of historical facts. One person might arrange the facts on a timeline, and another might arrange them on a map. The same facts can lead to different models and different worldviews. Being an expert is mostly about finding a good reference frame to arrange facts and observations. Albert Einstein started with the same facts as his contemporaries. However, he found a better way to arrange them, a better reference frame, that permitted him to see analogies and make predictions that were surprising. What is most fascinating about Einstein’s discoveries related to special relativity is that the reference frames he used to make them were everyday objects. He thought about trains, people, and flashlights. He started with the empirical observations of scientists, such as the absolute speed of light, and used everyday reference frames to deduce the equations of special relativity. Because of this, almost anyone can follow his logic and understand how he made his discoveries. In contrast, Einstein’s general theory of relativity required reference frames based on mathematical concepts called field equations, which are not easily related to everyday objects. Einstein found this much harder to understand, as does pretty much everyone else. In 1978, when Vernon Mountcastle proposed that there was a common algorithm underlying all perception and cognition, it was hard to imagine what algorithm could be powerful enough and general enough to fit the requirement. 
It was hard to imagine a single process that could explain everything we think of as intelligence, from basic sensory perception to the highest and most admired forms of intellectual ability. It is now clear to me that the common cortical algorithm is based on reference frames. Reference frames provide the substrate for learning the structure of the world, where things are, and how they move and change. Reference frames can do this not just for the physical objects that we can directly sense, but also for objects we cannot see or feel and even for concepts that have no physical form. Your brain has 150,000 cortical columns. Each column is a learning machine. Each column learns a predictive model of its inputs by observing how they change over time. Columns don’t know what they are learning; they don’t know what their models represent. The entire enterprise and the resultant models are built on reference frames. The correct reference frame to understand how the brain works is reference frames.

How can one really improve the way one thinks? From what I've read so far, in this book and many others, the key is first being able to cycle through different frameworks and mental models for every situation.

One example, as in this excerpt, is to use different reference frames to organise events, such as viewing them on a timeline or geographically on a map. Each arrangement would yield substantially different insights and inspirations.
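To make this concrete, here is a minimal sketch of the idea in Python. The events and field names are entirely made up for illustration; the point is only that the same facts, arranged under a timeline frame versus a map frame, foreground different patterns.

```python
# The same facts, arranged under two different reference frames.
# All event data below is illustrative, not from the book.
events = [
    {"name": "Moveable type", "year": 1040, "region": "Asia"},
    {"name": "Compass in navigation", "year": 1100, "region": "Asia"},
    {"name": "Printing press", "year": 1440, "region": "Europe"},
    {"name": "Steam engine", "year": 1712, "region": "Europe"},
]

# Reference frame 1: a timeline -- foregrounds sequence and possible causality.
by_time = [e["name"] for e in sorted(events, key=lambda e: e["year"])]

# Reference frame 2: a map -- foregrounds geography and the clustering of ideas.
by_place = {}
for e in events:
    by_place.setdefault(e["region"], []).append(e["name"])

print(by_time)   # what led to what?
print(by_place)  # where did ideas cluster and spread?
```

Same four facts, two frames: the timeline invites "what caused what?" questions, while the map invites "why there?" questions.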

Have a relationship problem with your family? That's where experience comes in. Reference many different experiences (those you have personally lived through, or those learned from others). Reference different cultural values (Asian vs Western) to understand different ways of thinking about the same situation.

I find one becomes most rigid when one subscribes to one model of thinking and views all facts about the world through that frame: e.g. going through life thinking that capitalism is the root of all evil, or that it's your sole life duty to ensure your child succeeds in a certain way. There is nothing wrong with rigidly subscribing to one way of referencing the world, but you'll probably end up disappointed, let down, or that stubborn old person in the corner, since there are many varied outcomes and everyone has their own way of viewing things.

Experts learn to view things and reference facts in a domain in ways other people can't. It also explains why cross-functional teams from different domains working together can be more creative.

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 2

Vision, I realized, is doing the same thing as touch. Patches of retina are analogous to patches of skin. Each patch of your retina sees only a small part of an entire object, in the same way that each patch of your skin touches only a small part of an object. The brain doesn’t process a picture; it starts with a picture on the back of the eye but then breaks it up into hundreds of pieces. It then assigns each piece to a location relative to the object being observed. Creating reference frames and tracking locations is not a trivial task. I knew it would take several different types of neurons and multiple layers of cells to make these calculations. Since the complex circuitry in every cortical column is similar, locations and reference frames must be universal properties of the neocortex. Each column in the neocortex—whether it represents visual input, tactile input, auditory input, language, or high-level thought—must have neurons that represent reference frames and locations. Up to that point, most neuroscientists, including me, thought that the neocortex primarily processed sensory input. What I realized that day is that we need to think of the neocortex as primarily processing reference frames. Most of the circuitry is there to create reference frames and track locations. Sensory input is of course essential. As I will explain in coming chapters, the brain builds models of the world by associating sensory input with locations in reference frames. Why are reference frames so important? What does the brain gain from having them? First, a reference frame allows the brain to learn the structure of something. A coffee cup is a thing because it is composed of a set of features and surfaces arranged relative to each other in space. Similarly, a face is a nose, eyes, and mouth arranged in relative positions. You need a reference frame to specify the relative positions and structure of objects. 
Second, by defining an object using a reference frame, the brain can manipulate the entire object at once. For example, a car has many features arranged relative to each other. Once we learn a car, we can imagine what it looks like from different points of view or if it were stretched in one dimension. To accomplish these feats, the brain only has to rotate or stretch the reference frame and all the features of the car rotate and stretch with it. Third, a reference frame is needed to plan and create movements. Say my finger is touching the front of my phone and I want to press the power button at the top. If my brain knows the current location of my finger and the location of the power button, then it can calculate the movement needed to get my finger from its current location to the desired new one. A reference frame relative to the phone is needed to make this calculation. Reference frames are used in many fields. Roboticists rely on them to plan the movements of a robot’s arm or body. Reference frames are also used in animated films to render characters as they move. A few people had suggested that reference frames might be needed for certain AI applications. But as far as I know, there had not been any significant discussion that the neocortex worked on reference frames, and that the function of most of the neurons in each cortical column is to create reference frames and track locations. Now it seems obvious to me.

Had to do a quick Wikipedia search for this one. The standard Wikipedia explanation of a reference frame was way too complex.

So the key message here is that setting reference frames is a universal property of your neocortex, whether it is used for touch, vision or hearing. The parts of the brain that process these sensory inputs are not that different: they all process reference frames.

So what are reference frames? Imagine seeing a ball rolling down a street. The houses behind it, the lights and the road itself are the reference frame. Without them, you wouldn't be able to actually see or recognise that the ball is rolling at all.
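A rough way to picture this computationally (a toy sketch of my own, not anything from the book): whether the ball is "moving" at all depends entirely on which reference frame you measure its position against.

```python
# Toy illustration: motion only exists relative to a reference frame.
# Positions are 1-D coordinates along the street; all values are made up.

def relative_position(obj_pos, frame_origin):
    """Position of an object expressed in a given reference frame."""
    return obj_pos - frame_origin

# Two snapshots in time, in world coordinates (metres):
ball_t0, ball_t1 = 10.0, 14.0       # the ball rolls 4 m down the street
house = 0.0                         # a fixed house: our reference frame
walker_t0, walker_t1 = 10.0, 14.0   # a walker keeping pace with the ball

# Relative to the house, the ball clearly moves:
moved_vs_house = (relative_position(ball_t1, house)
                  - relative_position(ball_t0, house))

# Relative to the walker, the ball doesn't move at all:
moved_vs_walker = (relative_position(ball_t1, walker_t1)
                   - relative_position(ball_t0, walker_t0))

print(moved_vs_house)   # 4.0
print(moved_vs_walker)  # 0.0
```

Strip away the houses and the road (the fixed frame) and "rolling down the street" stops being a measurable fact, which is the intuition behind the excerpt.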

And what does this mean? It means that if the different parts of our neocortex share a universal component or principle, then we might be closer to understanding the common principle behind "intelligence". Just as DNA is a universal code shared among all of us that yields wildly different results, we might be close to figuring out the "DNA" behind intelligence too.

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 1

What Mountcastle says in these first three sentences is that the brain grew large over evolutionary time by adding new brain parts on top of old brain parts. The older parts control more primitive behaviors while the newer parts create more sophisticated ones. Hopefully this sounds familiar, as I discussed this idea in the previous chapter. However, Mountcastle goes on to say that while much of the brain got bigger by adding new parts on top of old parts, that is not how the neocortex grew to occupy 70 percent of our brain. The neocortex got big by making many copies of the same thing: a basic circuit. Imagine watching a video of our brain evolving. The brain starts small. A new piece appears at one end, then another piece appears on top of that, and then another piece is appended on top of the previous pieces. At some point, millions of years ago, a new piece appears that we now call the neocortex. The neocortex starts small, but then grows larger, not by creating anything new, but by copying a basic circuit over and over. As the neocortex grows, it gets larger in area but not in thickness. Mountcastle argued that, although a human neocortex is much larger than a rat or dog neocortex, they are all made of the same element—we just have more copies of that element.

The passages from this book are going to be rather technical and heavy (I need to reread them a few times), so why not start with a nice, simple, brief passage?

Our brain grew through a form of evolution: over time, it simply copied basic elements that were already there and grew bigger, which allowed us to engage in more complex tasks. It's not that the inner parts of our brains are anything unique in the animal kingdom; we just have more copies of them.

Noise, Daniel Kahneman;Olivier Sibony;Cass R. Sunstein – 6

In sum, some people might insist that an advantage of a noisy system is that it will allow people to accommodate new and emerging values. As values change, and if judges are allowed to exercise discretion, they might begin to give, for example, lower sentences to those convicted of drug offenses or higher sentences to those convicted of rape. We have emphasized that if some judges are lenient and others are not, then there will be a degree of unfairness; similarly situated people will be treated differently. But unfairness might be tolerated if it allows room for novel or emerging social values. The problem is hardly limited to the criminal justice system or even to law. With respect to any number of policies, companies might decide to allow some flexibility in their judgments and decisions, even if doing so produces noise, because flexibility ensures that as new beliefs and values arise, they can change policies over time. We offer a personal example: when one of us joined a large consulting firm some years ago, the not-so-recent welcome pack he received specified the travel expenses for which he was allowed to claim reimbursement (“one phone call home on safe arrival; a pressing charge for a suit; tips for bellboys”). The rules were noise-free but clearly outdated (and sexist). They were soon replaced with standards that can evolve with the times. For example, expenses must now be “proper and reasonable.” The first answer to this defense of noise is simple: Some noise-reduction strategies do not run into this objection at all. If people use a shared scale grounded in an outside view, they can respond to changing values over time. In any event, noise-reduction efforts need not and should not be permanent. If such efforts take the form of firm rules, those who make them should be willing to make changes over time. They might revisit them annually. They might decide that because of new values, new rules are essential. 
In the criminal justice system, the rule makers might reduce sentences for certain crimes and increase them for others. They might decriminalize some activity altogether—and criminalize an activity that had previously been considered perfectly acceptable. But let’s step back. Noisy systems can make room for emerging moral values, and that can be a good thing. But in many spheres, it is preposterous to defend high levels of noise with this argument. Some of the most important noise-reduction strategies, such as aggregating judgments, do allow for emerging values. And if different customers, complaining of a malfunctioning laptop, are treated differently by a computer company, the inconsistency is unlikely to be because of emerging values. If different people get different medical diagnoses, it is rarely because of new moral values. We can do a great deal to reduce noise or even eliminate it while still designing processes to allow values to evolve.

This will be the last excerpt shared from this book, and I think it is a good summary of its overarching message.

Humans, innovation and the development of new values are inherently messy. There remains an eternal conflict between existing legacy values and new ones. For us to truly accommodate new values, we do need a certain level of noise in our systems.

Nonetheless, this cannot be a defense against noise reduction, especially in sectors such as healthcare where the cost of inconsistency can be huge. Flexibility cannot be used as an excuse for inconsistency.

Noise, Daniel Kahneman;Olivier Sibony;Cass R. Sunstein – 5

The potentially high costs of noise reduction often come up in the context of algorithms, where there are growing objections to “algorithmic bias.” As we have seen, algorithms eliminate noise and often seem appealing for that reason. Indeed, much of this book might be taken as an argument for greater reliance on algorithms, simply because they are noiseless. But as we have also seen, noise reduction can come at an intolerable cost if greater reliance on algorithms increases discrimination on the basis of race and gender, or against members of disadvantaged groups. There are widespread fears that algorithms will in fact have that discriminatory consequence, which is undoubtedly a serious risk. In Weapons of Math Destruction, mathematician Cathy O’Neil urges that reliance on big data and decision by algorithm can embed prejudice, increase inequality, and threaten democracy itself. According to another skeptical account, “potentially biased mathematical models are remaking our lives—and neither the companies responsible for developing them nor the government is interested in addressing the problem.” According to ProPublica, an independent investigative journalism organization, COMPAS, an algorithm widely used in recidivism risk assessments, is strongly biased against members of racial minorities. No one should doubt that it is possible—even easy—to create an algorithm that is noise-free but also racist, sexist, or otherwise biased. An algorithm that explicitly uses the color of a defendant’s skin to determine whether that person should be granted bail would discriminate (and its use would be unlawful in many nations). An algorithm that takes account of whether job applicants might become pregnant would discriminate against women. In these and other cases, algorithms could eliminate unwanted variability in judgment but also embed unacceptable bias. In principle, we should be able to design an algorithm that does not take account of race or gender. 
Indeed, an algorithm could be designed that disregards race or gender entirely. The more challenging problem, now receiving a great deal of attention, is that an algorithm could discriminate and, in that sense, turn out to be biased, even when it does not overtly use race and gender as predictors.

Can algorithms provide a fairer way to make judgements and decisions? Humans are prone to bias, and we tend to be slaves to our emotions in the moment. But it all depends on how you define fair.

If you aim to be logically consistent all the time, using algorithms to make decisions can perpetuate widespread discrimination and take the individual element out of things. While we already tend to stereotype and make assumptions based on certain visual characteristics, algorithms set those assumptions in stone.

If you do want to allow each situation its own judgement, where context is taken into account, then inefficiencies and discrepancies are inevitable. What then matters is how we measure those discrepancies and what level of them we are willing to accept.

While it might seem that the book is saying humans are guilty of noise, purely resorting to algorithms and rules would cause many people to be discriminated against even more heavily, without any consideration for context. Now that sounds like dystopian fiction come true.

Noise, Daniel Kahneman;Olivier Sibony;Cass R. Sunstein – 4

The only measure of cognitive style or personality that they found to predict forecasting performance was another scale, developed by psychology professor Jonathan Baron to measure “actively open-minded thinking.” To be actively open-minded is to actively search for information that contradicts your preexisting hypotheses. Such information includes the dissenting opinions of others and the careful weighing of new evidence against old beliefs. Actively openminded people agree with statements like this: “Allowing oneself to be convinced by an opposing argument is a sign of good character.” They disagree with the proposition that “changing your mind is a sign of weakness” or that “intuition is the best guide in making decisions.” In other words, while the cognitive reflection and need for cognition scores measure the propensity to engage in slow and careful thinking, actively open-minded thinking goes beyond that. It is the humility of those who are constantly aware that their judgment is a work in progress and who yearn to be corrected. We will see in chapter 21 that this thinking style characterizes the very best forecasters, who constantly change their minds and revise their beliefs in response to new information. Interestingly, there is some evidence that actively open-minded thinking is a teachable skill. We do not aim here to draw hard-and-fast conclusions about how to pick individuals who will make good judgments in a given domain. But two general principles emerge from this brief review. First, it is wise to recognize the difference between domains in which expertise can be confirmed by comparison with true values (such as weather forecasting) and domains that are the province of respect-experts. A political analyst may sound articulate and convincing, and a chess grandmaster may sound timid and unable to explain the reasoning behind some of his moves. 
Yet we probably should treat the professional judgment of the former with more skepticism than that of the latter. Second, some judges are going to be better than their equally qualified and experienced peers. If they are better, they are less likely to be biased or noisy. Among many things that explain these differences, intelligence and cognitive style matter. Although no single measure or scale unambiguously predicts judgment quality, you may want to look for the sort of people who actively search for new information that could contradict their prior beliefs, who are methodical in integrating that information into their current perspective, and who are willing, even eager, to change their minds as a result. The personality of people with excellent judgment may not fit the generally accepted stereotype of a decisive leader. People often tend to trust and like leaders who are firm and clear and who seem to know, immediately and deep in their bones, what is right. Such leaders inspire confidence. But the evidence suggests that if the goal is to reduce error, it is better for leaders (and others) to remain open to counterarguments and to know that they might be wrong. If they end up being decisive, it is at the end of a process, not at the start.

Making the right judgements inherently requires someone to be open to revision and to find fault in their own initial judgements. However, we tend to be attracted to people who make bold, daring claims about the future or about situations, as if blind confidence equals leadership material.

While it might be uninspiring to follow someone who can’t make up their mind, it might also not be the wisest to follow someone who seemingly has it all figured out. This is especially true when it comes to predicting the future. The best way to ensure the highest accuracy in judgement calls is to constantly question yourself and be your own devil’s advocate.

Changing your mind and being actively open-minded are good traits if not taken to excess, and we should probably be more accepting of them if we truly want to follow a leader with good judgement.