Eun-Ju Lee and Soo Yun Shin
About midnight on Christmas Eve of 2016, one of us got caught by her then-six-year-old son while secretly (and frantically) gift-wrapping his Christmas present. ‘So you bought our Christmas presents!’ Panic-stricken, she searched frantically for words to cover up, but soon realised there was no need. ‘You give Santa our gifts so he can deliver them to us!’
Misbeliefs are hard to correct. Even when inaccurate beliefs are corrected, their influence may linger on (Johnson and Seifert 1994). In this chapter, we aim to evaluate the literature on the debunking of misinformation in light of the following questions. First, what do we mean when we say corrective information is (not) effective? A range of outcome variables has been adopted to assess how ‘effective’ debunking efforts are, with varying success rates. Second, under what conditions is correction more or less effective? What source-, message-, receiver-, and context-related variables moderate the effectiveness of correction? Lastly, what psychological processes account for the successes and failures of debunking efforts? Understanding the cognitive mechanisms and potential biases that shape the correction process would help better counter misinformation. After addressing these questions, suggestions for future research are proposed with a view to developing theory-based recommendations for how to combat misinformation.
What does debunking misinformation do?
At its core, debunking misinformation entails exposing falsehood and inaccuracies in verifiable information, thereby correcting misbeliefs. Beyond the immediate and primary consequence of belief changes, researchers have examined the secondary effects of corrective information as well, such as attitudes and behavioural intention.
First and foremost, studies have examined how successful debunking messages are in rectifying individuals’ faulty knowledge and misbeliefs. For instance, people who received a corrective message to US president Trump’s misstatements regarding climate change showed better factual knowledge about climate change and the role of the Paris Climate Accord than those who did not receive the correction (Porter et al. 2019). Similarly, corrective information concerning a new tobacco product led people to realise the health risks associated with it (Biener et al. 2007). Moreover, belief changes that followed exposure to corrective information on the immigration issue in the US persisted up to four weeks (Carnahan et al. 2020).
By contrast, attempts to correct misperceptions about vaccines were largely ineffective and often backfired, reinforcing rather than countering misbeliefs (e.g. ‘Some vaccines cause autism in healthy children’) (Pluviano et al. 2017; Pluviano et al. 2019). For political misbeliefs such as the existence of Iraqi WMD, a correction message increased misbeliefs, albeit only among conservatives (Nyhan and Reifler 2010). As a result, although a meta-analysis of 65 studies (N = 23,604) (Walter and Murphy 2018) confirmed that corrective messages, on average, significantly reduced misbeliefs (r = .35, p = .0005), the cases in which corrective information fails to overwrite previously acquired misinformation, and thus fails to produce belief updating, deserve attention.
Although attitudes towards an object are closely associated with the beliefs people hold about it, belief correction does not always produce corresponding attitude changes. In the aforementioned study, corrective messages about Trump’s misstatements changed people’s factual beliefs about climate change but did not alter their policy preferences (Porter et al. 2019). Likewise, exposure to journalistic fact-checks of Trump’s campaign messages only improved people’s belief accuracy, with no significant change in their attitudes towards him (Nyhan et al. 2019). Such persistent influence of corrected information on attitudes is referred to as belief echoes (Thorson 2016). However, evidence to the contrary also exists. After estimating federal spending on scientific research and then learning that the actual spending fell short of their estimates, participants showed higher levels of support for increased spending in science (Goldfarb and Kriner 2017). Likewise, the provision of accurate information about Chinese investment and regulations over foreign investment induced a stronger preference for a Chinese investment proposal in Canada (Li et al. 2019).
One possible explanation for the inconsistency concerns the strength of prior attitudes: the stronger the existing attitudes, like those towards a celebrity politician (versus government spending), the less likely a mere correction of certain facts is to alter them. Similarly, when the overall attitude is based on a complex, multi-faceted belief repertoire (and the evaluations of each belief component), correcting a part of the repertoire might not be sufficient to induce attitude changes.
Harms of misinformation go well beyond ill-advised opinions and misperceptions. Amid the COVID-19 pandemic, fake remedies spread far and wide over social media, and the mistaken belief that toxic methanol protects against the virus killed nearly 300 people in Iran (Karimi and Gambrell 2020). Relatively few studies have examined how debunking messages affect individuals’ behavioural intentions, either to follow the debunking message’s recommendation or to refrain from the action advocated in the corrected misinformation, and self-reported intentions might not predict actual behaviour with as much precision as we would like. Still, corrective ads for a suspicious weight-loss drug weakened consumers’ intention to try out the drug (Aikin et al. 2017), and providing facts against science denialism increased people’s willingness to perform behaviours supported by science, such as vaccination or taking action against climate change (Schmid and Betsch 2019).
In addition to factual beliefs, attitudes, and behavioural intentions directly related to the specific content of misinformation, exposure to debunking messages may affect more generalised, higher-order cognitions. For people with low interest in a political issue, reading a news article containing fact-checks lowered their confidence in their ability to find the truth in politics (i.e. epistemic political efficacy) (Pingree et al. 2014). Similarly, frequent exposure to debunking information might cultivate chronic scepticism and make ‘false until proven true’ the default mode of processing any information. Alternatively, as people get used to seeing false information being tagged as such, they might come to assume that information with no tag is truthful (i.e. the implied truth effect) (Pennycook et al. 2020). Rather than treating these effects as unintended by-products, future research should look beyond the immediate correction of misbeliefs and attitude changes and explore second-order effects that can have longer-lasting impacts on how people interpret and respond to mediated information.