Science misinformation as a public problem
The exact prevalence of misinformation and misperceptions is not entirely clear. What is clear, we think, is that some degree of public concern is warranted. But it is still important to keep in mind certain findings that may temper our alarm about misinformation in general. For example, political misinformation in the 2016 US election
only reached a small proportion of the public who have the most conservative online information diets (Guess et al. 2020) and who are heavy Internet users (Nelson and Taneja 2018), while those who are exposed to misinformation do not always believe what they read (Allcott and Gentzkow 2017).
(Li 2020: 126)
Yet, given the extent to which science is interwoven into governmental decision-making (Dietz 2013), there may be unique reasons to be concerned when it comes to misperceptions about science. Many of the most prominent misperceptions concern science topics. In a meta-analysis of health-related information, particularly on vaccines and infectious diseases, Wang et al. (2019: 8) conclude, ‘there is broad consensus that misinformation is highly prevalent on social media and tends to be more popular than accurate information’. In addition, large proportions of the public hold misperceptions about climate change, the safety of GMOs, or a link between vaccines and autism (Flynn et al. 2017). Using 2019 ANES data, Jerit et al. (2020) show that the percentages of respondents holding these misperceptions are 25.4, 46.6, and 15.5 percent, respectively. Sixty percent of respondents held at least one misperception, and individuals holding misinformed beliefs were confident in their answers.
A further reason for concern is that people do not invariably update their beliefs when offered information intended to correct misperceptions. For example, in a meta-analysis of 32 experimental studies, Walter and Tukachinsky (2020) find that corrections do not eliminate the effects of misinformation, even though they tend to move attitudes in the intended direction on average. This broad conclusion appears to extend to science misperceptions specifically. Scholars have tested whether individuals, when exposed to corrective information, update their misperceptions concerning vaccines (Nyhan and Reifler 2015; van Stekelenburg et al. 2020), GMOs (Bode and Vraga 2015), climate change (Vraga et al. 2019), food safety (van Stekelenburg et al. 2020), or the size of the federal science budget (Goldfarb and Kriner 2017). The evidence confirms that corrections have a mixed record. As we discuss later, many scholars have sought to better understand the conditions under which corrections lead individuals to update their attitudes.
Why do individuals hold misperceptions about science?
A myriad of individual-level and systemic factors have been theorised to influence the prevalence of misperceptions about science. The former concern psychological processes following exposure to misinformation, while the latter concern features of the broader information environment that affect exposure in the first place.
Individual ability and motivation to evaluate science
The likelihood that individuals develop misperceptions about science depends on their ability and motivation to critically evaluate science information and then recognise and reject misinformation. Regarding ability, individuals’ limited epistemic knowledge about science constrains their capacity to assess science information. For example, findings from a recent national survey show that 77 percent of respondents could not explain the idea behind a scientific study, and 36 percent had trouble understanding probability, signalling major obstacles to conceptual understanding of the scientific process (Scheufele and Krause 2019).
Individuals also bring motivations to their consumption of science information that affect the likelihood of forming misperceptions, regardless of their abilities. Theories of motivated reasoning posit that people access and evaluate information in ways that fulfil certain goals and motivations (Kunda 1990). Individuals can be driven by an accuracy motivation, in which case they seek to arrive at accurate conclusions, or by directional motivations, in which case they seek to arrive at a particular, desired conclusion. For many policy decisions, scientific consensus serves as the most accurate, ‘factually competent’ information available (Dietz 2013). However, individuals with directional motivations may pursue reasoning strategies that lead them to reject such consensus and develop misperceptions (Pasek 2018).
A common directional goal is the motivation to defend a pre-existing belief: individuals exposed to scientific information may resist updating their attitudes when the information does not cohere with their standing beliefs. For example, Ma et al. (2019) studied the effect of a consensus message concerning human-induced climate change. The results showed a backlash: people who entered the study sceptical of climate change were unpersuaded by the message and updated their beliefs in the opposite direction. Druckman and Bolsen (2011) similarly show that, once people form opinions about emerging technologies like carbon nanotubes and GMOs, they cling to those opinions even in the face of contradictory scientific information. Thus, people may maintain science misperceptions out of a motivation to protect their beliefs.
The defence of group-based identities constitutes another source of directional motivation. Many people want to maintain the beliefs held by their valued groups, regardless of scientific accuracy, to protect against costly social ostracisation. For example, partisans in the US often form beliefs on climate change, fracking, and other scientific issues to align with fellow partisans, regardless of the science (Kahan 2015). Identity-protective motivation does not guarantee misperceptions, but it makes them more likely because the goal is group alignment rather than accuracy. One exception may arise when the relevant group is scientists themselves: van der Linden et al. (2018) argue that some individuals see scientists’ beliefs as a relevant group norm, which drives them to align their beliefs with the scientific consensus.
Core values can also underlie a directional motivation. In such cases, individuals may only accept science information if it fits their value system. Lewandowsky et al. (2013) find that conservatives and individuals who value free markets reject climate change when the science implies heavier government regulation. In another study, conservatives deny climate science when it is framed in terms of ‘fairness’ but accept it when it is discussed in terms of the more cherished value of ‘sanctity’ (Wolsko et al. 2016).
It is worth noting that individuals with accuracy motivation can still develop misperceptions about science. For instance, people who rely on scientific and social consensus as heuristics to help them achieve accuracy may be wrong about the consensus. Scheufele and Krause (2019) cite survey data indicating that a majority of respondents erroneously believe there is no scientific consensus regarding the health effects of GMOs or the proposition that the universe was created in the Big Bang. About one-third erroneously believe there is no scientific consensus on climate change and evolution. People also misestimate social consensus among their peers and the general public, particularly on environmental issues (Schuldt et al. 2019) and human-caused climate change (Mildenberger and Tingley 2019).
Furthermore, accuracy-motivated individuals may still exhibit directional bias regarding the information sources they judge to be accurate: individuals may evaluate scientists as trustworthy sources of information more frequently when the scientific message is compatible with their standing beliefs (Kahan et al. 2011). As a result, partisan perceptions of source credibility can lead to polarised views on matters of science, despite accuracy motivation (Druckman and McGrath 2019).
Systemic factors that encourage misinformation and misperceptions
While individual-level factors influence susceptibility to misperceptions given exposure to misinformation, systemic factors determine the overall permeation of misinformation into the informational environment in the first place. One important factor is politicisation. The politicisation of science occurs when actors exploit uncertainty in the scientific process to cast doubt on findings (Bolsen and Druckman 2015, 2018a). Unfortunately, the inherent uncertainty of science can be difficult to communicate to the public. Even when certain conclusions constitute a scientific consensus, they are vulnerable to politicisation (Druckman 2017), and misinformation is more likely to spread as various science issues become politicised.
Another closely related factor is partisan polarisation. Growing evidence suggests that inflamed partisan tensions in a more polarised political environment abet the spread of misinformation. In a study of 2,300 American Twitter users, Osmundsen et al. (2020) find that individuals with out-party animus are more likely to share fake news, particularly when they are Republicans. This suggests that polarisation partly fuels the proliferation of misinformation on the internet. In addition, as political elites polarise on high-profile science issues like climate change, rank-and-file partisans have clearer cues about the ‘correct’ party position. In such cases, partisans attend more to party endorsements than to substantive information when forming opinions (Druckman et al. 2013). Therefore, a polarised information environment implies both greater exposure to partisan misinformation and a higher individual propensity to use party cues, increasing the likelihood that individuals form misperceptions.
Third, the evolved information environment may now be more amenable to spreading misinformation through the rise of bots and trolls on social media platforms, the use of click-driven algorithms that reward outlandish stories, and the influence of dark money (Lazer et al. 2018; Iyengar and Massey 2019). Researchers hypothesise that structural features of technology and digital media facilitate the spread of misinformation: larger interpersonal networks, greater deindividuation, and the ability to share or post instantly impose fewer constraints and inhibitions on online behaviour (Brady et al. n.d.). Gossiping and the sharing of outrageous content, which would be costly behaviours in physical settings, are less costly online and may even yield rewards in the form of positive social feedback (Crockett 2017).
Given such social feedback incentives, misperceptions can be especially contagious when shared in a morally charged environment. Brady et al.’s (n.d.) MAD model of moral contagion shows how the growing strength of partisan group identity in the American electorate might enhance individual-level motivations to share moralised content. As a result, individuals are likely to share morally charged political messages that attack out-group members and elevate in-group members. The informational value of shared content is secondary to the social status and positive social feedback sharers receive from like-minded partisans in their social network. Thus, content that sparks moral outrage along partisan lines may quickly spread to a large number of viewers, regardless of its accuracy.
While the media and social transformations of the last quarter century have brought many benefits, they have also had negative consequences that can increase the likelihood of scientific misperceptions. Politicised science, polarised parties, social media practices, and the interaction of these forces can lead to the spread of misinformation and the formation of misperceptions.