Next, we turn to antidotes — that is, strategies to combat the formation of scientific misperceptions. Many of these focus on communication strategies aimed at addressing the individual-level psychological drivers of misperceptions. These include inoculation messages, corrections, shifting motivations, and framing. Finally, we also discuss possible interventions to respond to systemic sources of misperceptions.
One promising avenue to address misperceptions involves inoculations that warn people that they will be exposed to misinformation (Cook et al. 2017; van der Linden, Leiserowitz, et al. 2017; van der Linden, Maibach, et al. 2017). Inoculation theory (or ‘prebunking’) posits that this kind of warning — provided through a ‘weakened dose’ of inaccurate information followed directly by a refutation — can build resistance to misinformation. The inoculation works by establishing an accurate standing belief in the recipient, which they will later ‘defend’ when they encounter misleading information. For example, Bolsen and Druckman (2015) test inoculation techniques in survey experiments asking respondents about two novel energy technologies. The authors show that warnings are effective in the face of politicised communications. Scientific consensus messages about the benefits of each technology moved opinion among respondents who received a warning prior to receiving a contrary, politicised message. By contrast, respondents who only received a politicised message ignored the scientific consensus. Similarly, van der Linden, Maibach, et al. (2017) warned respondents they would be exposed to belief-threatening information (‘some politically motivated groups use misleading tactics to try to convince the public that there is a lot of disagreement among scientists’) and then provided a pre-emptive refutation (‘there is virtually no disagreement among experts that humans are causing climate change’). As in the earlier experiment, this technique restored the impact of scientific-consensus messages, despite politicisation.
Cook et al. (2017) explore inoculation messages highlighting the argumentation tactics employed in the politicisation of climate science. Such tactics include the presentation of a ‘false balance’ of evidence and the use of ‘fake experts’ to manufacture doubt. In one study, they report that an inoculation message was effective at neutralising the impact of a false balance of evidence. However, in a second study, the findings were more mixed: an inoculation message only reduced the effect of politicisation among respondents who valued free markets.
Roozenbeek and van der Linden (2019a, 2019b) investigate whether prompting respondents to actively engage with inoculation messages confers resistance to misinformation. In an initial study, participants were provided facts about an increase in the number of police-related incidents involving Dutch asylum seekers (Roozenbeek and van der Linden 2019a). Small groups of respondents were randomly assigned to produce a fake news article on the topic by role-playing one of four different types of ‘characters’: (1) the denier, (2) the alarmist, (3) the clickbait monger, and (4) the conspiracy theorist. The results showed that participation in the game increased resistance to political misinformation in a fake news article. In a large follow-up study, participation in the game increased people’s ability to detect and resist misinformation, irrespective of individual-level factors such as political ideology.
Taken together, the results of these studies suggest that pre-emptively refuting science politicisation may be an effective strategy when politicisation can be anticipated. While this is not always possible, the theory and evidence on inoculations do provide one potential route to reducing misperceptions.
A large literature explores the extent to which corrections lead individuals to discard factual misperceptions. As mentioned earlier, corrections do not entirely eliminate the effects of misinformation, but evidence suggests that they can be effective under particular conditions. Many findings in this regard follow from psychological concepts such as mental models or fluency. Individuals incorporate misinformation into larger ‘mental models of unfolding events’ (Lewandowsky et al. 2012: 114) and may ‘prefer an incorrect model over an incomplete model’ (114). It follows that corrections tend to be more effective if they are detailed and explain why a misperception is incorrect, and less effective if individuals pre-emptively generate reasons supporting the misinformation (Chan et al. 2017). Additionally, individuals may perceive information to be more accurate when they can recall it more easily — that is, when the information is characterised by high fluency (Berinsky 2017). As a result, corrections are less effective when misinformation is repeated more often (Walter and Tukachinsky 2020), and many scholars agree that a correction should not repeat the misinformation itself (Cook and Lewandowsky 2011).
While this research has generated insights concerning the conditions for successful corrections, overcoming directional motivations continues to be a fundamental challenge. One can expect corrections to be less effective the more they threaten an individual’s worldview or pre-existing attitudes. For instance, Bode and Vraga (2015) find that corrections are more effective for misperceptions concerning GMOs than for those concerning vaccines, a difference the authors attribute in part to prior-attitude strength. In Bolsen and Druckman (2015), directional motivations account for the finding that corrections were far less effective than inoculations. As discussed previously, directional motivations also interact with individuals’ evaluations of information sources. Studies of misinformation corrections are consistent with evidence suggesting that individuals value source trustworthiness more than source knowledge and that directional motivations influence perceived trustworthiness (Jerit and Zhao 2020). This may be troubling in the context of science misperceptions, as it suggests that scientific expertise alone is insufficient to bolster the effect of corrections. Given the overarching challenge posed by directional motivation in the case of corrections, we now discuss strategies that seek to address individuals’ directional motivations and worldviews more directly.