Assessing the state of knowledge
Researchers are keenly interested in how polarisation and misinformation contribute to misperceptions (e.g. DiFonzo et al. 2016; Flynn et al. 2017; Leviston et al. 2013; Pennycook et al. 2018; Thorson et al. 2018). The fact that the digital media environment is thought to facilitate both polarisation and misinformation exposure makes it a natural place to look for causal explanations. Yet the evidence suggests there is more to the story. Cognitive accounts suggest misinformation effects should be highly conditional on predispositions, motivations, and context. Affective polarisation shapes how misinformation is processed as much as it dictates exposure to misinformation.
While the digital media environment provides ample opportunity for misinformation exposure, beliefs drive information consumption and processing more than they are shaped by it. And as the US, a context of growing affective and elite polarisation, demonstrates, motivated reasoning is only likely to amplify the influence of existing beliefs. If partisan identity is so rigidly in place that it drives even the evaluation of information upon exposure, the potential for political misinformation to persuade out-partisans should be minimal in polarised contexts.
Yet whether exposure has important behavioural effects remains an open question. We know that exposure to misinformation can make people more likely to endorse or embrace attitude-congruent misinformation, even when they are aware of its inaccuracies (Gaines et al. 2007; Garrett et al. 2016; Schaffner and Luks 2018; Schaffner and Roche 2016). Even though evidence suggests direct persuasive effects of factual misinformation should be minimal, or at the very least conditional, despite high rates of mass exposure, this conclusion depends on several unanswered questions.
First, what effects are of most concern? Misinformation studies focus largely on direct persuasive effects, but what about indirect effects? Given the role strong partisans play as opinion leaders (Holbert et al. 2010), their vulnerability to attitude-consistent misinformation may have harmful indirect effects as they feel emboldened to endorse and/or share misinformation (Garrett et al. 2016; Messing and Westwood 2014). If there is a two-step flow in the digital media environment, and at least some evidence suggests there is (e.g. Feezell 2018), there may be cause for concern, especially considering that facts do not matter for what gets shared online (Weng et al. 2012; Vosoughi et al. 2018; Van Duyn and Collier 2019). If misperceptions are more about expressive responding or strategic endorsement than belief, should we be any less concerned? Or are the downstream effects just as troubling?
It could be that the media environment does have effects, but that the mechanisms through which they operate differ from those commonly articulated. Misinformation promotes the expression or endorsement of political misperceptions but not through persuasion with incorrect facts. Instead, misperceptions are embraced as partisans feel emboldened to endorse inaccurate but attitude-consistent beliefs (e.g. Garrett et al. 2016). In-network cheerleading and repetition might persuade those without strong political predispositions by enhancing familiarity, accessibility, and perceptions of accuracy (Weaver et al. 2007; DiFonzo et al. 2016; Schwarz et al. 2016; Leviston et al. 2013), but among partisans, it merely reinforces the willingness to accept false information and embrace misperceptions in service to identity (e.g. Garrett et al. 2016). In the context of trying to understand how digital media stoke populism, this kind of indirect process seems just as troublesome. Still, it is important that we do not mischaracterise the causal relationships.
Second, what kinds of attitudinal changes and behavioural outcomes should we be most concerned about? Currently, misinformation studies are interested primarily in persuasive effects producing attitude change or misperceptions, especially regarding voting behaviour. This is ironic given the difficulty researchers have had demonstrating persuasive media effects (Iyengar 2017), and the bar is much higher to change minds and sides than to undermine and discourage.
Evidence from recent elections underscores the point, showing that the intent behind recent misinformation tactics, including recent Russian disinformation campaigns, was to use falsehoods to demobilise and discourage rather than persuade (Lewandowsky et al. 2017; Kim et al. 2018).1 The facts are not relevant, and neither is attitude change. Rather, the relevant outcomes reflect disengagement from politics, such as declining participation among targeted groups, or related precursors, such as apathy, diminished efficacy, and cynicism (Lewandowsky et al. 2017). Few studies of misinformation focus on these outcomes; the literature provides few answers about behavioural effects (Lazer et al. 2018).
However, what we do know supports insights from cognitive explanations. Misperceptions appear irrelevant for vote choice, further evidence downplaying the importance of facts (Swire et al. 2017a). Yet when the aim is to mobilise or demobilise, emotional appeals and divisive issues are often successful (Krupnikov 2011). Research on recent Facebook misinformation tactics reflects this, revealing effective strategies aimed at casting doubt and sowing confusion (Kim et al. 2018). If the point is to discourage and dissuade more than to change hearts and minds, we should examine participatory outcomes or their precursors rather than vote choice or factual political questions. While the conditions for persuasive misinformation effects may be narrow, the potential for mobilisation and demobilisation seems profound (Lewandowsky et al. 2017).
Relatedly, what kinds of misinformation are most important? There is conceptual opacity across the literature on misinformation and misperceptions (Flynn et al. 2017; Jerit and Zhao 2020). Kuklinski and colleagues define political misinformation as when someone ‘holds the wrong information’ (2000, 792). Allcott and Gentzkow define fake news as ‘news articles that are intentionally and verifiably false and could mislead readers’ (2017, 213). Lazer and colleagues define it as ‘fabricated information that mimics news media content in form but not in organizational process or intent’ (2018, 1094). Slippage aside, these are all reasonable conceptualisations. Political misinformation exists in many forms and overlaps heavily with a range of variants, from unintentionally erroneous information to purposefully misleading information to all-out fabrication. All of these are found in news, elite rhetoric, and other political communication. Yet recent evidence (e.g. Kim et al. 2018) suggests that these definitions miss the mark, either because they fail to capture the outcomes intended by actors spreading misinformation or because they do not capture the tactics being employed with respect to message or source. If the aim is to divide, distract, or discourage, indicators based on facts tell only part of the story. Similarly, isolating studies to fake news might miss important paid media strategies like those used by stealth groups on Facebook (Kim et al. 2018). And limiting research to messages that are demonstrably false rather than purposefully but only slightly misleading (Rojecki and Meraz 2016) may do the same.
Another underexplored question asks which targets of misinformation are most important to consider. Electoral misinformation tactics are targeted on the basis of class and race and use wedge issues to divide (Kim et al. 2018). This strategy is troubling because it appeals to populist sentiments and may be effective, further exacerbating racial, ethnic, and socio-economic divides. Partisanship provides one route for motivated reasoning; targeting other-group divides may be equally effective. According to the literature on media effects (e.g. Zaller 1992), if paid media allow for strategies based on intense, micro-targeted messaging, we should be most concerned about misinformation targeting citizens of low political interest and knowledge (Miller et al. 2016), with affective appeals meant to distract, obscure, and demobilise.