Misinformation and disinformation
Rachel Armitage and Cristian Vaccari
This chapter will provide an overview of the debates surrounding misinformation and disinformation in political communication. After defining the key terms, we will contextualise this discussion, connecting recent concern about mis/disinformation with the growth of populism, declining trust in the media, and the role of digital platforms. We will then examine the key political, psychological, technological, and contextual factors that make people susceptible to mis/disinformation, before assessing current efforts to identify and tackle factually problematic content online. We conclude with broad recommendations for future efforts in this field.
Defining the key terms
Descriptors of problematic information range from general catch-all terms to specific references, competing in disputed hierarchies that change across people, context, and time. Terms including ‘junk news’ (Narayanan et al. 2018) and ‘fake news’ (Lazer et al. 2018) have variously been employed to represent manipulated, fabricated, extremist, satirical, sensationalist, parody, propagandist, and conspiratorial content (Tandoc, Lim & Ling 2018). This lack of definitional consistency potentially underlies conflicting academic findings (Tucker et al. 2018), illustrating how terminology can influence both the perception of the problem and its solution.
The most commonly accepted distinction between types of problematic information is misinformation and disinformation. Both terms refer to the sharing of incorrect, inaccurate, or misleading content, but they are separated by intentionality. While misinformation entails accidentally sharing inaccurate content, disinformation constitutes deliberate deception, often based on outright fabrications (Jack 2017). Difficult as it may be to determine the intentions of mis/disinformation sharers — especially when they are ordinary social media users — this distinction captures an important normative as well as empirical difference.
Some authors have suggested additional terms to cover conceptual space bordering mis/disinformation, including ‘xisinformation’, in which it is hard to parse a sharer’s intent (Jack 2017, 16), and ‘mal-information’ to describe intentionally harmful sharing of accurate information (Wardle & Derakhshan 2017, 5). However, this ongoing semantic discussion may distract from the need to tackle such problematic information (Weeks & Gil de Zuniga 2019). Any such efforts must start from a comprehensive understanding of the contextual factors that enable mis/disinformation to take root.
Contextualising the challenge of mis/disinformation
Whilst misleading information is not a new phenomenon, increasing concern about global mis/disinformation has revealed contemporary catalysts behind such problematic content (Lazer et al. 2018). One contributing factor is the rise of populism in political systems around the world. Populist leaders often attack experts and independent journalists, alleging bias in response to any negative media coverage (Newman et al. 2019). This is particularly effective in an increasingly polarised context, where populist supporters and detractors are ever less willing to engage with one another (Mason 2018). As a result, mis/disinformation has gained a stronger foothold in public discourse, reducing the space for evidence-based debate (Nyhan 2018). Political actors themselves often bear responsibility for spreading disinformation, especially during election campaigns. Prior to the United Kingdom’s 2016 referendum to leave or remain in the European Union (EU), there was a preponderance of misleading content favouring the Leave campaign on Twitter, with problematic assertions by the campaign itself achieving greater traction than disinformation efforts by outside groups (Gorrell et al. 2019). Similar activity was decried during the 2019 UK general election, with politicians accused of ‘playing fast and loose with the facts, avoiding journalistic scrutiny, and denigrating the media’ (Newman 2020, 12).
Accordingly, populism and political polarisation have contributed to the decline of trust in mainstream media and news organisations. In the UK, trust in news has been falling since 2015, and even the BBC is now seen as having an agenda, especially regarding divisive issues such as Brexit (Newman et al. 2019). Populist supporters are particularly likely to distrust independent news media, which they see as part of despised elites (Fawzi 2019), limiting the ability of the established media to authoritatively correct mis/disinformation. One possible outcome is the cultivation of an ‘anything-goes’ mentality among social media users (Chadwick & Vaccari 2019), who may become less vigilant about the quality of news they share as online social norms are eroded, and establishing the truth becomes increasingly difficult and contested.
Indeed, the broad proliferation of mis/disinformation has arguably been accelerated by social media (Wardle & Derakhshan 2017). Social networking sites have challenged the role of traditional media as information gatekeeper (Resnick, Ovadya & Gilchrist 2018), lowering the cost of entry to the production and distribution of news and thereby vastly increasing the quantity (but not necessarily the quality) of available content (Lazer et al. 2018). This has allowed politicians — especially those who can afford mass-scale digital advertising — to communicate directly with the public, free of the restrictions normally accompanying journalistic mediation (Siegel 2018). Social media has further facilitated the artificial inflation of problematic content via bots (automated accounts) and cyborgs (hybrid human/automated accounts), as well as empowering ordinary citizens — including partisan activists — to quickly create and widely disseminate material of varying veracity (Cook, Ecker & Lewandowsky 2015), often using content from news organisations as a resource to influence others (Chadwick, Vaccari & O’Loughlin 2018).

Mobile messaging apps (such as WhatsApp, Snapchat, and Facebook Messenger) pose distinctive challenges, facilitating private informal discussions between small, strong-tie networks that may be less guarded against mis/disinformation (Valeriani & Vaccari 2018). However, users are more likely to both issue and receive corrections when they share mis/disinformation on WhatsApp than on Facebook, possibly because they are less fearful of suffering backlash for challenging members of their strong-tie networks than when they engage with their weak ties on the more open social media platforms (Rossini et al. 2019).