
Daily Tao – Nicholas Carr, The Glass Cage: Automation and Us – 7

One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a challenge, we may be motivated by an anticipation of the ends of our labor, but, as Frost saw, it’s the work—the means—that makes us who we are. Automation severs ends from means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face the same existential question that the Shushwap confronted: Does our essence still lie in what we know, or are we now content to be defined by what we want? That sounds very serious. But the aim is joy. The active soul is a light soul. By reclaiming our tools as parts of ourselves, as instruments of experience rather than just means of production, we can enjoy the freedom that congenial technology provides when it opens the world more fully to us. It’s the freedom I imagine Lawrence Sperry and Emil Cachin must have felt on that bright spring day in Paris a hundred years ago when they climbed out onto the wings of their gyroscope-balanced Curtiss C-2 biplane and, filled with terror and delight, passed over the reviewing stands and saw below them the faces of the crowd turned skyward in awe.

The final passage from this book. The key question is whether automation frees us up to do things that really give us meaning, or whether losing the element of “work” and “challenge” in knowing and doing things makes us feel lost instead.

Humans can be paradoxical in what we want. When we are busy and being challenged, we might yearn for leisure time and the chance to do nothing at all. Yet, when we are fully relaxed and not doing anything, there remains an emptiness, a search for meaning. Most modern jobs might not be the answer to that search for meaning, so automating such jobs away might not necessarily be the worst thing (probably not from an economic point of view, at least). It remains to be seen how this modern disruption will continue to shape our perspectives and our sense of purpose.

Daily Tao – Nicholas Carr, The Glass Cage: Automation and Us – 6

Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely, if ever, accept such a trade-off. The main reason they invest in automation, after all, is to reduce labor costs and streamline operations. As individuals, too, we almost always seek efficiency and convenience when we decide which software application or computing device to use. We pick the program or gadget that lightens our load and frees up our time, not the one that makes us work harder and longer. Technology companies naturally cater to such desires when they design their wares. They compete fiercely to offer the product that requires the least effort and thought to use.

Learning requires inefficiency in the short term, and sometimes there is not much of a rational incentive for businesses to invest in learning for their employees at the cost of short-term inefficiency. Such a perspective might be a tad short-sighted, but most businesses do tend to focus on the short term over the long term.

Even from a personal career perspective, learning might be counter-productive to our own career goals in some instances. Wanting to round out your skill set (e.g. by working in another department) might compel you to take a lateral career move, or even a step backwards financially. We might convince ourselves it is better for the long term, but any short-term loss in earnings weighs heavily on our minds and creates huge inertia against change.

Daily Tao – Nicholas Carr, The Glass Cage: Automation and Us – 5

Falling victim to the substitution myth, the RAND researchers did not sufficiently account for the possibility that electronic records would have ill effects along with beneficial ones—a problem that plagues many forecasts about the consequences of automation. The overly optimistic analysis led to overly optimistic policy. As the physicians and medical professors Jerome Groopman and Pamela Hartzband noted in a withering critique of the Obama administration’s subsidies, the 2005 RAND report “essentially ignore[d] downsides to electronic medical records” and also discounted earlier research that failed to find benefits in shifting from paper to digital records. RAND’s assumption that automation would be a substitute for manual work proved false, as human-factors experts would have predicted. But the damage, in wasted taxpayer money and misguided software installations, was done. EMR systems are used for more than taking and sharing notes. Most of them incorporate decision-support software that, through on-screen checklists and prompts, provides guidance and suggestions to doctors during the course of consultations and examinations. The EMR information entered by the doctor then flows into the administrative systems of the medical practice or hospital, automating the generation of bills, prescriptions, test requests, and other forms and documents. One of the unexpected results is that physicians often end up billing patients for more and more costly services than they would have before the software was installed. As a doctor fills out a computer form during an examination, the system automatically recommends procedures—checking the eyes of a diabetes patient, say—that the doctor might want to consider performing. By clicking a checkbox to verify the completion of the procedure, the doctor not only adds a note to the record of the visit, but in many cases also triggers the billing system to add a new line item to the bill. The prompts may serve as useful reminders, and they may, in rare cases, prevent a doctor from overlooking a critical component of an exam. But they also inflate medical bills—a fact that system vendors have not been shy about highlighting in their sales pitches.

The biggest assumption we tend to make about digital records is that they will necessarily be cleaner and will enable all sorts of digital improvements and technological advances. What we need to consider, however, is that most digital entries still require human input. In healthcare, electronic medical records have the potential to actually distract a doctor’s attention. Most software implementations also require the buy-in of stakeholders and a strong focus on data cleanliness. These issues tend to keep digital implementation projects from fulfilling their full potential.

Daily Tao – Nicholas Carr, The Glass Cage: Automation and Us – 4

Since its publication in 1908, the paper that Yerkes and Dodson wrote about their experiments, “The Relation of Strength of Stimulus to Rapidity of Habit-Formation,” has come to be recognized as a landmark in the history of psychology. The phenomenon they discovered, known as the Yerkes-Dodson law, has been observed, in various forms, far beyond the world of dancing mice and differently colored doorways. It affects people as well as rodents. In its human manifestation, the law is usually depicted as a bell curve that plots the relation of a person’s performance at a difficult task to the level of mental stimulation, or arousal, the person is experiencing. At very low levels of stimulation, the person is so disengaged and uninspired as to be moribund; performance flat-lines. As stimulation picks up, performance strengthens, rising steadily along the left side of the bell curve until it reaches a peak. Then, as stimulation continues to intensify, performance drops off, descending steadily down the right side of the bell. When stimulation reaches its most intense level, the person essentially becomes paralyzed with stress; performance again flat-lines. Like dancing mice, we humans learn and perform best when we’re at the peak of the Yerkes-Dodson curve, where we’re challenged but not overwhelmed. At the top of the bell is where we enter the state of flow. The Yerkes-Dodson law has turned out to have particular pertinence to the study of automation. It helps explain many of the unexpected consequences of introducing computers into work places and processes. In automation’s early days, it was thought that software, by handling routine chores, would reduce people’s workload and enhance their performance. The assumption was that workload and performance were inversely correlated. Ease a person’s mental strain, and she’ll be smarter and sharper on the job. The reality has turned out to be more complicated. Sometimes, computers succeed in moderating workload in a way that allows a person to excel at her work, devoting her full attention to the most pressing tasks. In other cases, automation ends up reducing workload too much. The worker’s performance suffers as she drifts to the left side of the Yerkes-Dodson curve.

The state of “flow”, where we are challenged but not overwhelmed, is extremely pertinent to the study of how automation can change our lives for better or worse. It raises the question of whether we should consider the human element when we automate tasks away for efficiency.

Should we accept a little inefficiency in exchange for optimal engagement and performance in the worker, customer, or person? There is no single true answer, but it is important to take this into account when designing processes for people.

Daily Tao – Nicholas Carr, The Glass Cage: Automation and Us – 3

Automation bias is closely related to automation complacency. It creeps in when people give undue weight to the information coming through their monitors. Even when the information is wrong or misleading, they believe it. Their trust in the software becomes so strong that they ignore or discount other sources of information, including their own senses. If you’ve ever found yourself lost or going around in circles after slavishly following flawed or outdated directions from a GPS device or other digital mapping tool, you’ve felt the effects of automation bias. Even people who drive for a living can display a startling lack of common sense when relying on satellite navigation. Ignoring road signs and other environmental cues, they’ll proceed down hazardous routes and sometimes end up crashing into low overpasses or getting stuck in the narrow streets of small towns. In Seattle in 2008, the driver of a twelve-foot-high bus carrying a high-school sports team ran into a concrete bridge with a nine-foot clearance. The top of the bus was sheared off, and twenty-one injured students had to be taken to the hospital. The driver told police that he had been following GPS instructions and “did not see” signs and flashing lights warning of the low bridge ahead.

An interesting excerpt that introduces the concept of automation bias. It sets in when we begin to rely so much on technology that we no longer function consciously, as in the case of following GPS instructions.

There might be no way to avoid this as more parts of our lives get simplified by technology. But it is at the intersection of technology and human input that we have to be aware of this potential bias and be mindful in keeping it in check.

While we can trust what computer systems output to us most (if not almost all) of the time, it is still healthy to keep ourselves thinking actively and to react when the technology is wrong.

Daily Tao – Nicholas Carr, The Glass Cage: Automation and Us – 2

Air travel’s lethal days are, mercifully, behind us. Flying is safe now, and pretty much everyone involved in the aviation business believes that advances in automation are one of the reasons why. Together with improvements in aircraft design, airline safety routines, crew training, and air traffic control, the mechanization and computerization of flight have contributed to the sharp and steady decline in accidents and deaths over the decades. In the United States and other Western countries, fatal airliner crashes have become exceedingly rare. Of the more than seven billion people who boarded U.S. commercial flights in the ten years from 2002 through 2011, only 153 ended up dying in a wreck, a rate of two deaths for every hundred million passengers. In the ten years from 1962 through 1971, by contrast, 1.3 billion people took flights, and 1,696 of them died, for a rate of 133 deaths per hundred million. But this sunny story carries a dark footnote. The overall decline in the number of plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation. When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into a now rare role, they too often make mistakes. The consequences, as the Continental Connection and Air France disasters show, can be catastrophic. Over the last thirty years, dozens of psychologists, engineers, and ergonomics, or “human factors,” researchers have studied what’s gained and lost when pilots share the work of flying with software. They’ve learned that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human-factors expert at Britain’s University of Bristol, calls “a deskilling of the crew”.

Automation of processes has, for the most part, enabled the betterment of our lives and made many painful tasks easier. However, we now have to consider “deskilling” as an inevitable consequence of automation, where humans get worse at certain tasks, like flying in the case of this excerpt.

Relying on computer systems might not be a bad thing. It is only when the system fails, and more input is suddenly required from humans, that things begin to unravel.

Daily Tao – Nicholas Carr, The Glass Cage: Automation and Us – 1

Such forecasts are easy to dismiss. Their alarmist tone echoes the refrain heard time and again since the eighteenth century. Out of every economic downturn rises the specter of a job-munching Frankenstein monster. And then, when the economic cycle emerges from its trough and jobs return, the monster goes back in its cage and the worries subside. This time, though, the economy isn’t behaving as it normally does. Mounting evidence suggests that a troubling new dynamic may be at work. Joining Brynjolfsson and McAfee, several prominent economists have begun questioning their profession’s cherished assumption that technology-fueled productivity gains will bring job and wage growth. They point out that over the last decade U.S. productivity rose at a faster pace than we saw in the preceding thirty years, that corporate profits have hit levels we haven’t seen in half a century, and that business investments in new equipment have been rising sharply. That combination should bring robust employment growth. And yet the total number of jobs in the country has barely budged. Growth and employment are “diverging in advanced countries,” says economist Michael Spence, a Nobel laureate, and technology is the main reason why: “The replacement of routine manual jobs by machines and robots is a powerful, continuing, and perhaps accelerating trend in manufacturing and logistics, while networks of computers are replacing routine white-collar jobs in information processing.”

On one hand, historical precedent has shown that past major technological shifts have not resulted in lasting mass unemployment and job loss. On the other hand, technology has progressed so rapidly in the past few decades that, for the first time, it is able to actually replace certain cognitive functions of humans.

This book covers the possible perils and negative effects automation has on us. It is not just about the economy, as usually covered in the media, but also about how automation can impact the way we live and think.

Daily Tao – Andrew Shtulman, Scienceblind: Why Our Intuitive Theories About the World Are So Often Wrong – 5

As it turns out, teaching mathematical equivalence with both standard and nonstandard problems is more effective than starting out with only standard problems. In one study, researchers created two workbooks. One workbook contained only standard problems (4 + 3 = __), and one had problems with the same addends arranged in nonstandard formats (__ = 4 + 3). They then tested second-graders’ ability to answer each type of problem, standard and nonstandard, after completing each type of workbook. Not surprisingly, children who practiced nonstandard problems in their workbooks answered more nonstandard problems correctly than did children who practiced only standard problems. But children who practiced nonstandard problems also answered more standard problems correctly—twice as many, in fact. And this difference was observed even six months later. Both groups of children solved the same problems content-wise, but subtle differences in the formatting of these problems led to large and reliable differences in how much children learned from them. The point of this study, as well as the chicken-sexing study, is not just that some study materials are better than others. It’s that effective instruction requires an in-depth analysis of what concepts need to be learned and how those concepts are best conveyed.

An interesting last excerpt I’ll be sharing from this book. Just as we can vary the problems and questions we pose to our own children for their learning, we can always craft our own nonstandard problems to help ourselves understand things better. Sometimes, running through concepts in our heads and framing questions about the things we’ve learned can help us develop a better understanding of what we’ve read.

Daily Tao – Andrew Shtulman, Scienceblind: Why Our Intuitive Theories About the World Are So Often Wrong – 4

In short, intuitive theories focus more on the perceptible than the imperceptible, more on things than processes, and more on objects than contexts. Each of these themes extends to several intuitive theories but not all. Intuitive theories differ in form and function, and it would be a mistake to try to shoehorn all such theories into a single category. Failing to appreciate the role of molecular motion in heat transfer is substantively different from failing to appreciate the role of tectonic plates in volcanism. Failing to abstract a notion of density from the perceptual experience of heft is substantively different from failing to abstract a notion of germs from the perceptual experience of contagion. Any educator who wants to help students confront and correct their intuitive theories needs to tailor his or her instruction to those theories. There is, however, at least one common thread running through all intuitive theories: they are narrower and shallower than their scientific counterparts. They are narrower in what they explain, and they are shallower in how they explain it. Intuitive theories are about coping with present circumstances, the here and the now. Scientific theories are about the full causal story—from past to future, from the observable to the unobservable, from the minuscule to the immense.

Intuitive concepts that we naturally form versus scientific concepts developed from research. Our intuition comes from our experiences, and that can be very helpful in many day-to-day situations and in dealing with people, but we do need to be able to correct our intuitive theories.

Daily Tao – Andrew Shtulman, Scienceblind: Why Our Intuitive Theories About the World Are So Often Wrong – 3

Genes are involved in all our behavior, at some level, but geneticists are beginning to ascertain the links between particular genes and particular behaviors. And when we learn about such links, we tend to endow them with significance—more significance than they deserve. For instance, people who endorse genetic explanations for obesity believe they have less control over their weight than do people who doubt such explanations. Merely reading a newspaper article containing genetic explanations for obesity leads people to eat more junk food than they typically would. Equally problematic, female test takers perform significantly worse on standardized math tests if they take the tests after having read a genetic explanation for why women are underrepresented in math- and science-related professions. Our beliefs about our genes may affect our behavior more strongly than do the genes themselves. In the case of mathematics achievement, for instance, the evidence for innate gender differences is weak, but the evidence for socially primed gender differences is strong. Ironically, as scientists discern the limits of genetic influences on behavior, our knowledge of any influence at all can lead us to behave more fatalistically. Our genes do not dictate our destiny, but our beliefs about our genes may, if we let them.

What we believe, and maybe even what we read on a day-to-day basis, might have a stronger influence on our behaviours than we might think. Even accepting that we are limited by our genes in any way might dictate the way we behave and the effort we put in. The irony of the self-fulfilling prophecy.