Pennycook, Rand, and Eckles recently teamed up with Ziv Epstein, Mohsen Mosleh, and Antonio Arechar to put this approach to the test in a series of experiments on reducing the spread of misinformation online. They found that subtly nudging people to think about the accuracy of what they read can increase the quality of the information they share. A separate experiment by Brendan Nyhan and his colleagues showed that fake news labels reduced the perceived accuracy of false headlines. Taken together, these results suggest that subtle cognitive nudges to think about accuracy and veracity can dampen the spread of untrustworthy information on social media. That is good news, because labeling and accuracy nudges are unobtrusive solutions that scale.

But the solution is not perfect. In the study by Nyhan and colleagues, fake news labels also decreased belief in true news, suggesting that the labels bred a general distrust of news, which mirrors what happened when the SEC outed the fake news circulating on stock market news sites (as I described in Chapter 2). Furthermore, labeling fake news can create an “implied truth effect,” whereby consumers assume that unlabeled news must be true simply because it has avoided being debunked. As we pursue labeling, we must ensure that it counters fake news effectively while avoiding these known pitfalls in its implementation.

I advocated strongly for labeling fake news in my 2018 TEDx Talk at CERN in Geneva. Since then, the major platforms have adopted labeling as a proactive approach to rooting out misinformation. Twitter began labeling “manipulated media” in March 2020, a category that covers both sophisticated deepfakes and simple, deceptively edited video and audio that is fabricated or altered to the point that it changes the meaning of the content. While Facebook moved to label false posts more clearly in October 2019, it has so far refused to do so for political advertising or content.
When Twitter applied its new manipulated media label to a video of Joe Biden edited to make it appear he was conceding that he couldn’t win the presidency, the Biden campaign blasted Facebook for failing to label the same manipulated video. These judgment calls, and the details of misinformation labeling policies, will be the front lines of the fight to transparently distinguish truth from falsity. It is important that we make these policies as effective as possible while avoiding their documented shortcomings.
Labeling fake news as misleading and nudging people to think about the accuracy of what they read clearly works. Yet it is a solution I have always been uncomfortable with, because I believe, as a matter of principle, that everyone should have the right to come to their own conclusions.
Still, we all know that, as humans, we are prone to biases: we anchor ourselves to our initial viewpoints and shift our opinions to match the “tribes” we identify with. Naturally, this leads to more conflict and makes it difficult to move a nation forward together, especially in times of crisis (see today’s vaccination debate).
I’m not sure there will ever be a clear solution that balances the risk of over-censorship against individuals’ right to come to their own conclusions. Benevolent leaders may be within their rights to implement such initiatives for the common good, but trusting malevolent leaders with that power is a slippery slope. Unfortunately, this is a debate in my head that may never be resolved.