
Radical Uncertainty, Mervyn King;John Kay – 3

Tetlock’s assessment of the accuracy of historical forecasts provides useful insight into what characterises reliable and unreliable predictors. Few readers will be surprised that Tetlock learnt from his initial work that the forecasters in his sample were not very good; little better than a chimpanzee throwing darts. What is, perhaps, most surprising is that he found that the principal factor differentiating the good from the bad was how well known the forecaster was. The more prominent the individual concerned, the more often the forecaster is reported by the media, the more frequently consulted by politicians and business leaders, the less credence should be placed on that individual’s prognostications. Tetlock’s intriguing explanation draws on the distinction, first made by the Greek poet Archilochus, developed by Tolstoy and subsequently popularised by Isaiah Berlin, between the ‘hedgehog’ and the ‘fox’. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’. We both have the experience of dealing with researchers for radio and television programmes: if you profess an opinion that is unambiguous and – for preference – extreme, a car will be on its way to take you to the studio; if you suggest that the issue is complicated, they will thank you for your advice and offer to ring you back. They rarely do. People understandably like clear opinions but the truth is that many issues inescapably involve saying ‘on the one hand, but on the other’. The world benefits from both hedgehogs and foxes. Winston Churchill and Steve Jobs were hedgehogs, but if you are looking for accurate forecasts you will do better to employ foxes. Tetlock’s current good judgement project, intended to create teams who are not only good at forecasting but who become better with experience, is designed to educate foxes.

The kinds of behaviour and personalities that simplify issues and voice clear, one-sided opinions tend to make for better TV and entertainment. Unfortunately, we are all predisposed towards preferring simple and easily understood narratives.

This tendency manifests itself in the kind of influencers and content we see on social media today, whether it is a “do steps 1, 2 and 3 and get rich” formula, an “us against them” story, or another “rags to riches” narrative. We tend to subscribe to these ideas because they imprint easily on our minds: it’s easier to buy in, and you don’t have to wrestle with contradictory ideas in your head.

The above might be good for clarity, but it won’t help if you need accuracy. If you’d like to reach the correct judgements, then the people who might superficially appear indecisive or slow to commit may actually be the ones whose advice you should eventually take.

It’s also why I always take advice with a pinch of salt when it comes from someone who is “so sure” about things, or from seniors who tell you, unequivocally, that there is only one way to succeed and their path is the only one you should take.

Radical Uncertainty, Mervyn King;John Kay – 2

Steve Jobs was not watching a Bayesian dial: he was waiting until he recognised ‘the next big thing’. And Winston Churchill also played a waiting game as he saw the United States gradually dragged into war – and did his utmost to accelerate American entry. We do not know whether Obama walked into the fateful meeting with a prior probability in his mind: we hope not. He sat and listened to conflicting accounts and evidence until he felt he had enough information – knowing that he could expect only limited and imperfect information – to make a decision. That is how good decisions are made in a world of radical uncertainty, as decision-makers wrestle with the question ‘What is going on here?’ In contrast, bank executives relied on the judgements of their risk professionals, who in turn relied on Bayesian techniques, and the results were not encouraging. Woodford’s students, even though they were familiar with the principles of Bayesian reasoning, did not approach their task in this way – even though the experiment was designed to stimulate them to do so. Woodford’s students were not making bad decisions. They simply did not use Bayesian reasoning to process new information. An alternative interpretation of the experimental results is that the students were developing a sequence of narratives, and challenging and revising the narrative at discrete intervals as they went along. Far from being systematically biased, the students were systematically struggling to come to terms with radical uncertainty in the manner in which thoughtful people normally come to terms with it. (Or, perhaps, waiting for the session to end and to collect their $10.) When we express doubt about the practical relevance of the Bayesian dial, we are not for a moment suggesting that people should not modify their views in the light of new information. We think they should manage radical uncertainty as President Obama did – listening to evidence, hearing pros and cons, inviting challenges to the prevailing narrative, and finally reaching a considered decision. And Obama might have been forced, as Carter had been, to change his decision when he learnt of problems in the execution of the agreed plan which had not been anticipated. In the fortunate event this proved unnecessary.

What do you do when you have to make decisions and there are no reliable probabilities to consult? Steve Jobs certainly didn’t calculate probabilities before he chose to venture into PCs. If you only make decisions based on the maximum probability of success, you might actually be prevented from making the most meaningful life decisions available to you. The entrepreneur who assesses whether to begin a venture purely on probability probably wouldn’t start at all, because the numbers wouldn’t check out.

Assessing important life decisions in terms of probability should always be a consideration, and it can help us stay aware of the risks. However, it should only be a guideline, never your sole principle. Given that we are unable to assess the real probabilities amid so many uncertainties, relying on the numbers simply gives you a “false sense of precision”.
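For readers who haven’t come across the “Bayesian dial” the authors are sceptical of, here is a minimal sketch of what mechanically updating a prior with new evidence looks like. The numbers and the bayes_update helper are purely illustrative, not from the book; the excerpt’s point is that this machinery only helps when the probabilities can actually be stated.

```python
# Toy Bayesian update: revising a prior belief as new evidence arrives.
# All numbers here are made up purely for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start with a 40% prior that a venture succeeds (hypothetical).
belief = 0.40

# Each piece of "evidence" comes with assumed likelihoods:
# (P(seeing it if the venture will succeed), P(seeing it if it won't)).
evidence = [(0.7, 0.3), (0.6, 0.5), (0.8, 0.4)]

for p_true, p_false in evidence:
    belief = bayes_update(belief, p_true, p_false)
    print(f"updated belief: {belief:.2f}")

# Under radical uncertainty the inputs above are unknowable,
# which is exactly why the dial gives a false sense of precision.
```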

Radical Uncertainty, Mervyn King;John Kay – 1

The crisis of 2007–08 represented – obviously – a failure of economic analysis and economic policy. But while recognising the seriousness and cost of the financial crisis, economists have generally been reluctant to accept that their intellectual framework is in need of revision. Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not. They had already adopted mathematical techniques which gave the term ‘risk’ a different meaning from that of everyday usage. In this book we will describe the considerable confusion and economic damage which has arisen as a result of the failure to recognise that the terms ‘risk’, ‘uncertainty’ and ‘rationality’ have acquired technical meanings in economics which do not correspond to the everyday use of these words. And over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future. The difference between risk and uncertainty was the subject of lively debate in the inter-war period. Two great economists – Frank Knight in Chicago and John Maynard Keynes in Cambridge, England – argued forcefully for the continued importance of the distinction. Knight observed that ‘a measurable uncertainty, or “risk” proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all’. Keynes made a similar distinction. In an article summarising his magnum opus, The General Theory of Employment, Interest and Money , he wrote: By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. The title of this book, and its central concept, is radical uncertainty . Uncertainty is the result of our incomplete knowledge of the world, or about the connection between our present actions and their future outcomes. Depending on the nature of the uncertainty, such incomplete knowledge may be distressing or pleasurable. I am fearful of the sentence the judge will impose, but look forward to new experiences on my forthcoming holiday. We might sometimes wish we had perfect foresight, so that nothing the future might hold could surprise us, but a little reflection will tell us that such a world would be a dull place. We have chosen to replace the distinction between risk and uncertainty deployed by Knight and Keynes with a distinction between resolvable and radical uncertainty. Resolvable uncertainty is uncertainty which can be removed by looking something up (I am uncertain which city is the capital of Pennsylvania) or which can be represented by a known probability distribution of outcomes (the spin of a roulette wheel). 
With radical uncertainty, however, there is no similar means of resolving the uncertainty – we simply do not know. Radical uncertainty has many dimensions: obscurity; ignorance; vagueness; ambiguity; ill-defined problems; and a lack of information that in some cases but not all we might hope to rectify at a future date. These aspects of uncertainty are the stuff of everyday experience.

Wanted to cover more passages of the previous book. After some conversations, though, I couldn’t really find any fresh insights beyond the picture of the framework that I had already posted.

One thing that struck me was that most books, for commercial purposes, try to jam complex and intricate narratives into one model or framework of thinking. After all, simplicity sells. Think of books like Grit, Good to Great, or the books on vulnerability: books with a single common narrative that translates neatly into a TED Talk. The success of an idea usually doesn’t lie in how accurate it is, but in how easily it sticks. So as we move on to cover new books, do keep the above in mind and never let a single narrative dictate your thoughts. After all, multi-model thinking usually leads to much more accurate judgements and predictions.

Moving on, this book is about radical uncertainty, where historical data offers zero guidance in helping one predict future outcomes. One example in the book is Obama’s decision to send in the Navy SEALs to capture Osama bin Laden. He had no historical data to refer to, only estimated probabilities from his analysts, and ultimately had to make a judgement call. This book is about learning how to deal with that.
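To make the authors’ distinction concrete, here is a small sketch of “resolvable” uncertainty, the kind a roulette wheel presents: the probability distribution is fully known, so an expected value can simply be computed. The snippet is my own illustration; no such calculation was available for the bin Laden raid.

```python
# Resolvable uncertainty: a known probability distribution (European roulette).
# A single-number bet pays 35 to 1 and wins with probability 1/37.

p_win = 1 / 37
payout = 35          # profit per unit staked if the number comes up
loss = -1            # stake lost otherwise

expected_value = p_win * payout + (1 - p_win) * loss
print(f"Expected value per unit staked: {expected_value:.4f}")  # ~ -0.027

# Radical uncertainty is the opposite case: for a question like
# "will this raid succeed?" there is no known distribution to plug in.
```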

The Narrow Corridor, Daron Acemoglu;James A. Robinson – 1

This book is about liberty. Liberty depends on the different types of Leviathans and their evolution—whether a society will live without an effective state, put up with a despotic one, or manage to forge a balance of power that opens the way for the emergence of a Shackled Leviathan and the gradual flourishing of liberty. In contrast to Hobbes’s vision of society submitting its will to the Leviathan, which much of social science and the modern world order take for granted, it is fundamental to our theory that Leviathans are not always welcomed with open arms and their path is a rocky one, to say the least. In many instances society will resist their ascendancy and will do so successfully, just like the Tiv did and the Lebanese still do. The result of this resistance is illiberty. When this resistance crumbles, we may end up with a Despotic Leviathan, which looks a lot like the sea monster that Hobbes imagined. But this Leviathan, though it prevents Warre, does not necessarily make its subjects’ lives much richer than the “nasty, brutish, and short” existence that people eke out under the Absent Leviathan. Nor do its subjects really “submit their wills” to the Leviathan—any more than East Europeans chanting the “Internationale” in the streets before the collapse of the Berlin Wall really submitted their wills to Soviet Russia. The implications for citizens are different in some ways, but still there is no liberty. A very different type of Leviathan, a shackled one, emerges when there is a balance between its power and society’s capacity to control it. This is the Leviathan that can resolve conflicts fairly, provide public services and economic opportunities, and prevent dominance, laying down the basic foundations of liberty. This is the Leviathan that people, believing that they can control it, trust and cooperate with and allow to increase its capacity. This is the Leviathan that also promotes liberty by breaking down the various cages of norms tightly regulating behavior in society. But in a fundamental sense this is not a Hobbesian Leviathan. Its defining feature is its shackles: it does not have Hobbes’s sea monster’s dominance over society; it does not have the capability to ignore or silence people when they try to influence political decision making. It stands not above but alongside society.

Thought we’d begin with a diagram of the key framework used in this book. These are the same authors behind “Why Nations Fail”, the perfect book for your home bookshelf display.

Putting snarkiness aside, this book is a sequel to their previous book on extractive vs inclusive institutions, built around a different framework. What makes liberal societies succeed? The right balance between the power of society and the power of the state. The “Narrow Corridor” in which liberty flourishes lies along that delicate balance. The key point is that if the state is too powerful, society risks becoming despotic. On the other hand, if citizens have too much power, the state will not be able to act effectively on long-term plans that might be unpopular.

The key framework of the book can be simply explained by the graphic. I’ll be looking through the notes and sharing some of the key historical examples and other explanations by the author(s).

[Diagram: the Narrow Corridor framework – the power of the state versus the power of society]

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 5

Genes are just molecules that replicate. As genes evolve, they are not heading in any particular direction, nor is one gene intrinsically better than another, just as one molecule is not intrinsically better than any other molecule. Some genes may be better at replication, yet, as environments change, which genes are better at replicating also changes. Importantly, there is no overall direction to the changes. Life based on genes has no direction or goal. Life may manifest itself as a virus, a single-celled bacterium, or a tree. But there doesn’t appear to be any reason to suggest one life-form is better than another, beyond its ability to replicate. Knowledge is different. Knowledge has both a direction and an end goal. For example, consider gravity. In the not-too-distant past, nobody had any idea why things fell down and not up. Newton created the first successful theory of gravity. He proposed that it is a universal force, and he showed that it behaves according to a set of simple laws that could be expressed mathematically. After Newton, we would never go back to having no theory of gravity. Einstein’s explanation of gravity is better than Newton’s, and we will never go back to Newton’s theory. It wasn’t that Newton was wrong. His equations still accurately describe gravity as we experience it every day. Einstein’s theory incorporates Newton’s but better describes gravity under unusual conditions. There is a direction to knowledge. Knowledge of gravity can go from no knowledge, to Newton’s, to Einstein’s, but it can’t go in the opposite direction. In addition to a direction, knowledge has an end goal. The earliest human explorers did not know how big the Earth was. No matter how far they traveled, there was always more. Was the Earth infinite? Did it end with an edge where further travel would cause you to fall off? Nobody knew. But there was an end goal. It was assumed that there was an answer to the question, How big is the Earth? We eventually achieved that goal with a surprising answer. The Earth is a sphere, and now we know how big the Earth is. We are facing similar mysteries today. How big is the universe? Does it go on forever? Does it have an edge? Does it wrap around on itself like the Earth? Are there many universes? There are plenty of other things we don’t understand: What is time? How did life originate? How common is intelligent life? Answering these questions is a goal, and history suggests we can achieve it. A future driven by genes has little to no direction and only short-term goals: stay healthy, have kids, enjoy life. A future designed in the best interest of knowledge has both direction and end goals. The good news is we don’t have to choose one future over the other. It is possible to do both. We can continue to live on Earth, doing our best to keep it livable and trying to protect ourselves from our own worst behaviors. And we can simultaneously dedicate resources to ensuring the preservation of knowledge and the continuation of intelligence for a time in the future when we are no longer here.

Really liked the part where he says that knowledge has a direction and can only move forward. Unlike genes, whose goal is to self-replicate, knowledge has an end goal, and that might be something many of us can get behind.

Final excerpt from this book. I really liked this as an ending to a book about intelligence and how we need to see it as something worth preserving.

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 4

The situation we are in today reminds me of the early days of computing. The word “computer” originally referred to people whose job was to perform mathematical calculations. To create numeric tables or to decode encrypted messages, dozens of human computers would do the necessary calculations by hand. The very first electronic computers were designed to replace human computers for a specific task. For example, the best automated solution for message decryption was a machine that only decrypted messages. Computing pioneers such as Alan Turing argued that we should build “universal” computers: electronic machines that could be programmed to do any task. However, at that time, no one knew the best way to build such a computer. There was a transitionary period where computers were built in many different forms. There were computers designed for specific tasks. There were analog computers, and computers that could only be repurposed by changing the wiring. There were computers that worked with decimal instead of binary numbers. Today, almost all computers are the universal form that Turing envisioned. We even refer to them as “universal Turing machines.” With the right software, today’s computers can be applied to almost any task. Market forces decided that universal, general-purpose computers were the way to go. This is despite the fact that, even today, any particular task can be performed faster or with less power using a custom solution, such as a special chip. Product designers and engineers usually prefer the lower cost and convenience of general-purpose computers, even though a dedicated machine could be faster and use less power. A similar transition will occur with artificial intelligence. Today we are building dedicated AI systems that are the best at whatever task they are designed to do. But in the future, most intelligent machines will be universal: more like humans, capable of learning practically anything. Today’s computers come in many shapes and sizes, from the microcomputer in a toaster to room-size computers used for weather simulation. Despite their differences in size and speed, all these computers work on the same principles laid out by Turing and others many years ago. They are all instances of universal Turing machines. Similarly, intelligent machines of the future will come in many shapes and sizes, but almost all of them will work on a common set of principles. Most AI will be universal learning machines, similar to the brain. (Mathematicians have proven that there are some problems that cannot be solved, even in principle. Therefore, to be precise, there are no true “universal” solutions. But this is a highly theoretical idea and we don’t need to consider it for the purposes of this book.) Some AI researchers argue that today’s artificial neural networks are already universal. A neural network can be trained to play Go or drive a car. However, the same neural network can’t do both. Neural networks also have to be tweaked and modified in other ways to get them to perform a task. When I use the terms “universal” or “general-purpose,” I imagine something like ourselves: a machine that can learn to do many things without erasing its memory and starting over. There are two reasons AI will transition from the dedicated solutions we see today to more universal solutions that will dominate the future. The first is the same reason that universal computers won out over dedicated computers. 
Universal computers are ultimately more cost-effective, and this led to more rapid advances in the technology. As more and more people use the same designs, more effort is applied to enhancing the most popular designs and the ecosystems that support them, leading to rapid improvements in cost and performance. This was the underlying driver of the exponential increase in computing power that shaped industry and society in the latter part of the twentieth century. The second reason that AI will transition to universal solutions is that some of the most important future applications of machine intelligence will require the flexibility of universal solutions. These applications will need to handle unanticipated problems and devise novel solutions in a way that today’s dedicated deep learning machines cannot.

Hawkins reckons that artificial intelligence is still in the early phase of its development. Just as computers were once specialised machines that could only be used for specific tasks, AI systems today are still relatively rudimentary and can only be deployed in very specific situations. What’s next is getting to a point where machines are able to learn practically anything and handle different tasks without needing to be trained or set up by humans. You wouldn’t need to train an AI to play a new game; it would simply be capable of learning it on its own.

It’s hard for any of us to visualise what that kind of future would look like, simply because it hasn’t happened yet. There are also many differing opinions among the experts. Many have said that Artificial Narrow Intelligence (i.e. the kind that focuses on one specific task, such as an e-commerce recommendation engine) is the kind we can most likely expect to see dominant and widely used in our lifetimes. Artificial General Intelligence (more the Terminator kind) is where the experts differ, and some expect that it will probably never happen in our lifetimes.

Just as computers relied on the exponential increase in the computing power of chips to truly become ubiquitous and universal, AI will need a similar catalyst.
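A toy way to picture the dedicated-versus-universal distinction the excerpt draws (my own illustration, not Hawkins’): a dedicated machine does one job, while a universal one accepts any program and is repurposed by changing software rather than hardware.

```python
# Dedicated "machine": hard-wired for one task and nothing else.
def decrypt_rot13(message: str) -> str:
    return message.translate(str.maketrans(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz",
        "NOPQRSTUVWXYZABCDEFGHIJKLMnopqrstuvwxyzabcdefghijklm"))

# "Universal" machine: takes any program as input and runs it.
# Repurposing it means changing software, not rebuilding hardware.
def universal_machine(program, data):
    return program(data)

print(universal_machine(decrypt_rot13, "Uryyb"))        # Hello
print(universal_machine(str.upper, "hello"))            # HELLO
print(universal_machine(lambda xs: sum(xs), [1, 2, 3])) # 6
```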

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 3

To be an expert in any domain requires having a good reference frame, a good map. Two people observing the same physical object will likely end up with similar maps. For example, it is hard to imagine how the brains of two people observing the same chair would arrange its features differently. But when thinking about concepts, two people starting with the same facts might end up with different reference frames. Recall the example of a list of historical facts. One person might arrange the facts on a timeline, and another might arrange them on a map. The same facts can lead to different models and different worldviews. Being an expert is mostly about finding a good reference frame to arrange facts and observations. Albert Einstein started with the same facts as his contemporaries. However, he found a better way to arrange them, a better reference frame, that permitted him to see analogies and make predictions that were surprising. What is most fascinating about Einstein’s discoveries related to special relativity is that the reference frames he used to make them were everyday objects. He thought about trains, people, and flashlights. He started with the empirical observations of scientists, such as the absolute speed of light, and used everyday reference frames to deduce the equations of special relativity. Because of this, almost anyone can follow his logic and understand how he made his discoveries. In contrast, Einstein’s general theory of relativity required reference frames based on mathematical concepts called field equations, which are not easily related to everyday objects. Einstein found this much harder to understand, as does pretty much everyone else. In 1978, when Vernon Mountcastle proposed that there was a common algorithm underlying all perception and cognition, it was hard to imagine what algorithm could be powerful enough and general enough to fit the requirement. It was hard to imagine a single process that could explain everything we think of as intelligence, from basic sensory perception to the highest and most admired forms of intellectual ability. It is now clear to me that the common cortical algorithm is based on reference frames. Reference frames provide the substrate for learning the structure of the world, where things are, and how they move and change. Reference frames can do this not just for the physical objects that we can directly sense, but also for objects we cannot see or feel and even for concepts that have no physical form. Your brain has 150,000 cortical columns. Each column is a learning machine. Each column learns a predictive model of its inputs by observing how they change over time. Columns don’t know what they are learning; they don’t know what their models represent. The entire enterprise and the resultant models are built on reference frames. The correct reference frame to understand how the brain works is reference frames.

How can one really improve the way one thinks? From what I’ve read so far, in this book and many others, the key is first being able to cycle through different frameworks and mental models for every situation.

One example, as in this excerpt, would be to use different reference frames to organise events, such as viewing them on a timeline or geographically on a map. Each arrangement would yield substantially different insights and inspirations.
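A toy illustration (mine, not the book’s) of how the same facts placed in two different reference frames invite different questions:

```python
# The same historical facts arranged in two reference frames.
facts = [
    ("Magna Carta signed", 1215, "Runnymede, England"),
    ("Printing press invented", 1440, "Mainz, Germany"),
    ("French Revolution begins", 1789, "Paris, France"),
]

# Reference frame 1: a timeline -- invites questions about sequence and cause.
by_year = sorted(facts, key=lambda f: f[1])

# Reference frame 2: a map -- invites questions about geography and diffusion.
by_place = {place: event for event, _, place in facts}

print([event for event, year, _ in by_year])
print(by_place)
```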

Have a relationship problem in your family? That’s where experience comes in. Reference many different experiences (those you have lived through personally and those learned from others). Reference different cultural values (Asian vs Western) to understand different ways of thinking about the same situation.

I find one becomes most rigid when one subscribes to a single model of thinking and views all the world’s facts through that frame: going through life thinking that capitalism is the root of all evil, or that it’s your sole duty in life to ensure your child succeeds in a certain way. There is nothing wrong with rigidly subscribing to one way of referencing the world, but you’ll probably end up disappointed, let down, or become that stubborn old person in the corner, since outcomes vary widely and everyone has their own way of viewing things.

Experts learn to view things and reference facts in a domain in ways other people can’t. It also explains why cross-functional teams from different domains working together can be more creative.

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 2

Vision, I realized, is doing the same thing as touch. Patches of retina are analogous to patches of skin. Each patch of your retina sees only a small part of an entire object, in the same way that each patch of your skin touches only a small part of an object. The brain doesn’t process a picture; it starts with a picture on the back of the eye but then breaks it up into hundreds of pieces. It then assigns each piece to a location relative to the object being observed. Creating reference frames and tracking locations is not a trivial task. I knew it would take several different types of neurons and multiple layers of cells to make these calculations. Since the complex circuitry in every cortical column is similar, locations and reference frames must be universal properties of the neocortex. Each column in the neocortex—whether it represents visual input, tactile input, auditory input, language, or high-level thought—must have neurons that represent reference frames and locations. Up to that point, most neuroscientists, including me, thought that the neocortex primarily processed sensory input. What I realized that day is that we need to think of the neocortex as primarily processing reference frames. Most of the circuitry is there to create reference frames and track locations. Sensory input is of course essential. As I will explain in coming chapters, the brain builds models of the world by associating sensory input with locations in reference frames. Why are reference frames so important? What does the brain gain from having them? First, a reference frame allows the brain to learn the structure of something. A coffee cup is a thing because it is composed of a set of features and surfaces arranged relative to each other in space. Similarly, a face is a nose, eyes, and mouth arranged in relative positions. You need a reference frame to specify the relative positions and structure of objects. Second, by defining an object using a reference frame, the brain can manipulate the entire object at once. For example, a car has many features arranged relative to each other. Once we learn a car, we can imagine what it looks like from different points of view or if it were stretched in one dimension. To accomplish these feats, the brain only has to rotate or stretch the reference frame and all the features of the car rotate and stretch with it. Third, a reference frame is needed to plan and create movements. Say my finger is touching the front of my phone and I want to press the power button at the top. If my brain knows the current location of my finger and the location of the power button, then it can calculate the movement needed to get my finger from its current location to the desired new one. A reference frame relative to the phone is needed to make this calculation. Reference frames are used in many fields. Roboticists rely on them to plan the movements of a robot’s arm or body. Reference frames are also used in animated films to render characters as they move. A few people had suggested that reference frames might be needed for certain AI applications. But as far as I know, there had not been any significant discussion that the neocortex worked on reference frames, and that the function of most of the neurons in each cortical column is to create reference frames and track locations. Now it seems obvious to me.

Had to do a quick Wikipedia search for this one. The standard Wikipedia explanation of a reference frame was way too complex.

So the key message here is that building reference frames is a universal property of your neocortex, whether it is handling touch, vision or hearing. The parts of the brain that process these sensory inputs are not that different: they all process reference frames.

So what are reference frames? Imagine seeing a ball rolling down a street. The houses behind it, the streetlights and the road itself are the reference frame. Without them, you wouldn’t be able to actually see or recognise that the ball is rolling at all.

And what does this mean? It means that if the different parts of our neocortex share a universal component or principle, then we might be closer to understanding the common principle behind “intelligence”. Just as DNA is a code common to all of us, a universal principle that yields wildly different results, we might be close to figuring out the “DNA” behind intelligence too.
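A minimal sketch of the phone example from the excerpt: once the finger and the power button are both expressed in the phone’s reference frame, the required movement is simply the difference between the two locations. The coordinates below are invented for illustration.

```python
import numpy as np

# Locations expressed in the phone's own reference frame (cm), made-up numbers.
finger_on_phone = np.array([3.0, 2.0, 0.0])   # touching the front of the phone
power_button    = np.array([3.5, 14.0, 0.5])  # near the top edge

# The movement needed is simply the displacement within that reference frame.
movement = power_button - finger_on_phone
print("move by (cm):", movement)              # [ 0.5 12.   0.5]
print("distance (cm):", np.linalg.norm(movement).round(2))
```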

A Thousand Brains, Jeff Hawkins;Richard Dawkins – 1

What Mountcastle says in these first three sentences is that the brain grew large over evolutionary time by adding new brain parts on top of old brain parts. The older parts control more primitive behaviors while the newer parts create more sophisticated ones. Hopefully this sounds familiar, as I discussed this idea in the previous chapter. However, Mountcastle goes on to say that while much of the brain got bigger by adding new parts on top of old parts, that is not how the neocortex grew to occupy 70 percent of our brain. The neocortex got big by making many copies of the same thing: a basic circuit. Imagine watching a video of our brain evolving. The brain starts small. A new piece appears at one end, then another piece appears on top of that, and then another piece is appended on top of the previous pieces. At some point, millions of years ago, a new piece appears that we now call the neocortex. The neocortex starts small, but then grows larger, not by creating anything new, but by copying a basic circuit over and over. As the neocortex grows, it gets larger in area but not in thickness. Mountcastle argued that, although a human neocortex is much larger than a rat or dog neocortex, they are all made of the same element—we just have more copies of that element.

The passages from this book are gonna be rather technical and heavy (I need to reread them a few times), so why not start with a nice, simple, brief passage?

Our brain grew through a kind of evolution: over time, it simply copied basic elements that were already there and got bigger. This allowed us to engage in more complex tasks. It’s not that the building blocks of our brains are uniquely special in the animal kingdom; we just have more copies of them.

Noise, Daniel Kahneman;Olivier Sibony;Cass R. Sunstein – 6

In sum, some people might insist that an advantage of a noisy system is that it will allow people to accommodate new and emerging values. As values change, and if judges are allowed to exercise discretion, they might begin to give, for example, lower sentences to those convicted of drug offenses or higher sentences to those convicted of rape. We have emphasized that if some judges are lenient and others are not, then there will be a degree of unfairness; similarly situated people will be treated differently. But unfairness might be tolerated if it allows room for novel or emerging social values. The problem is hardly limited to the criminal justice system or even to law. With respect to any number of policies, companies might decide to allow some flexibility in their judgments and decisions, even if doing so produces noise, because flexibility ensures that as new beliefs and values arise, they can change policies over time. We offer a personal example: when one of us joined a large consulting firm some years ago, the not-so-recent welcome pack he received specified the travel expenses for which he was allowed to claim reimbursement (“one phone call home on safe arrival; a pressing charge for a suit; tips for bellboys”). The rules were noise-free but clearly outdated (and sexist). They were soon replaced with standards that can evolve with the times. For example, expenses must now be “proper and reasonable.” The first answer to this defense of noise is simple: Some noise-reduction strategies do not run into this objection at all. If people use a shared scale grounded in an outside view, they can respond to changing values over time. In any event, noise-reduction efforts need not and should not be permanent. If such efforts take the form of firm rules, those who make them should be willing to make changes over time. They might revisit them annually. They might decide that because of new values, new rules are essential. In the criminal justice system, the rule makers might reduce sentences for certain crimes and increase them for others. They might decriminalize some activity altogether—and criminalize an activity that had previously been considered perfectly acceptable. But let’s step back. Noisy systems can make room for emerging moral values, and that can be a good thing. But in many spheres, it is preposterous to defend high levels of noise with this argument. Some of the most important noise-reduction strategies, such as aggregating judgments, do allow for emerging values. And if different customers, complaining of a malfunctioning laptop, are treated differently by a computer company, the inconsistency is unlikely to be because of emerging values. If different people get different medical diagnoses, it is rarely because of new moral values. We can do a great deal to reduce noise or even eliminate it while still designing processes to allow values to evolve.

This will be the last excerpt shared from this book, and I think it is a good summary of its overarching message.

Humans, innovation and the development of new values are inherently messy. There remains an eternal conflict between existing legacy values and new ones. For us to truly accommodate new values, we do need a certain level of noise in our systems.

Nonetheless, this cannot be a defense against noise reduction, especially in sectors such as healthcare where the cost of inconsistency can be huge. Flexibility cannot be used as an excuse for inconsistency.