Susceptibility to mis/disinformation

Exposure to mis/disinformation is widespread online. A survey of UK social media users found that as many as 57 percent believed they had seen inaccurate news on social media in the previous month, and another one-fifth of respondents could not tell whether they had or not (Chadwick & Vaccari 2019). During the COVID-19 pandemic, between one-quarter and one-half of the British public, depending on the period, reported coming across false or misleading news about the virus, and an additional one-quarter were not sure (Ofcom 2020). Hence, it becomes important to identify factors that make individuals susceptible to mis/disinformation.

Political and social psychological factors

Humans respond to the uncertainty of their world through ‘shared sensemaking’, including sharing rumours and doubtful information (DiFonzo 2008, 10), which now occurs predominantly via social media and mobile messaging apps. Importantly, the framing of information under uncertainty can influence how we perceive a situation (Tversky & Kahneman 1986), as can our affinity for, or aversion to, uncertainty (Sorrentino & Roney 2000). Individuals favouring the familiar are likely to seek out certainty in societal groups (Lane 1962) that define their social identities (Hogg et al. 2007). Uncertainty reinforces feelings of in-group belonging and out-group enmity, creating inflated perceptions of differences between groups (Sherman et al. 2009). This is exacerbated if intergroup competition is seen as a zero-sum game, in which gains for one group become losses for the other (Mason 2018).

Such intergroup conflict is arguably characteristic of party politics, particularly in majoritarian two-party systems in which there is little hope of cooperation across the aisle. Accordingly, exposure to messages reinforcing inter-party differences can stoke divisions and increase polarisation (Vaccari 2018), feeding into the creation of opposing shared realities (DiFonzo 2008). Indeed, as political groups are perceived as ever more disparate, they are simultaneously viewed by in-group members as ever more clearly defined (Sherman et al. 2009). In many political systems, personal identities and political affiliations have become increasingly aligned (Arceneaux & Vander Wielen 2017), creating socially homogenous parties with ever more intolerant members (Mason 2018), thus reinforcing feelings of in-group belonging that validate one’s worldviews (Hogg et al. 2007).

Worldviews provide the basis for political ideologies, or the rationalised beliefs of a group employed by members in countering their opponents (Lane 1962). Individuals’ group identities therefore inevitably affect their political judgments, making it unlikely that they self-correct erroneous out-group beliefs for fear of undermining their worldview (Mason 2018). Instead, people use heuristics, or stereotypes, consistent with their worldview to cue their decisions (Lane 1962). Accordingly, messages alluding to group stereotypes can encourage acceptance of misleading information about political out-groups (Vaccari 2018; Nyhan 2018), discouraging critical reflection. Indeed, the term ‘political sectarianism’ has recently been proposed to more precisely capture the moralised nature of partisan identifications (Finkel et al. 2020). When political identities combine othering, aversion, and moral repulsion towards other political groups, partisans become more willing to intentionally discount information that does not support their views and are even prepared to accept the use of anti-democratic tactics by their side if they can secure victory against opponents seen as immoral.

Understanding the world mainly from a group perspective can thus contribute to social polarisation, or action rooted in prejudice and emotional volatility. People emotionally invest in the maintenance of their group identity even when this is irrational, because questioning an affiliation incorporating core aspects of one’s identity risks increasing uncertainty. In this vein, scholars contend that decision-making unavoidably begins with unconscious emotional intuition (Arceneaux & Vander Wielen 2017). Indeed, existing affect can prevent the rational processing of new information (Redlawsk 2002), and emotions are often the main catalyst for rumour transmission, overriding concerns for veracity (DiFonzo 2008; Lewandowsky et al. 2012). Individuals motivated to experience strong emotions need only feel that they are accurate, forming strong opinions from gut reactions and failing to interrogate their intuition, and are thus particularly susceptible to believing and sharing mis/disinformation (Anspach, Jennings & Arceneaux 2019). While the mechanisms described here have existed for a long time, recent technological changes have arguably magnified their impact and implications.

Technological factors

The affordances of digital media exacerbate susceptibility to worldview-congruent mis/disinformation as they tend to facilitate instantaneous, uncritical behaviour based on little cognitive consideration (Bartlett 2018). Indeed, social media platforms design and constantly update their affordances to retain users’ attention, mainly by providing content they are more likely to interact with (Bucher 2018). Whilst users have the capacity to curate the content they see online, giving rise to concerns about echo chambers (Lewandowsky et al. 2012), research suggests that only a minority of users consistently avoid political content they disagree with (Vaccari & Valeriani forthcoming). Indeed, most social media users have ideologically diverse online networks and are therefore indirectly exposed to unfamiliar perspectives (Dubois & Blank 2018), although such exposure may reinforce rather than challenge users’ pre-existing beliefs (Bail et al. 2018) and has been associated with the sharing of mis/disinformation (Rossini et al. 2019).

The online environment is also attractive to nefarious actors running propaganda and disinformation campaigns (Narayanan et al. 2018), who take advantage of social media business models prioritising clicks and data capture over content quality. Such campaigns employ computational propaganda techniques, manipulating algorithms, cluttering conversations, and hijacking public spaces (such as hashtags) to reach users with inauthentic content (Sanovich & Stukal 2018). They further utilise microtargeting to tap into users’ identities, preferences, and prejudices, thus making it harder for users to recognise and reject mis/disinformation consistent with their beliefs (Bartlett 2018). Another concern is the rapid evolution of technologies that can distort audio-visual content, particularly in the creation of so-called deepfakes: synthetic videos generated by artificial intelligence software to appear real, with such software increasingly available in open-source format and trained on publicly available data. Recent research suggests that even if individuals may not be misled by deepfakes, many react to them with uncertainty, reducing trust in all news encountered on social media as a result (Vaccari & Chadwick 2020).

Contextual factors

Political and media institutions can also affect the production, circulation, and impact of mis/disinformation. According to Humprecht, Esser & Van Aelst (2020), disinformation-resilient countries feature both an infrastructure that protects most citizens from exposure to false information (including strong public service broadcasters) and a citizenry less likely to believe and disseminate, and more likely to challenge, poor quality information. In an assessment of 18 Western democracies, the researchers found countries in Northern Europe (including Denmark, Finland, and The Netherlands) to be the most disinformation resilient, whilst Southern European countries (including Italy, Spain, and Greece) and the United States were found to be particularly susceptible. This is consistent with survey results showing greater concern about online mis/disinformation among citizens in South America (Brazil), the US, and the UK than among respondents in Germany and The Netherlands (Newman et al. 2019).

A nation’s resilience to mis/disinformation also depends on the type of social media platform favoured by its citizens. Notably, large WhatsApp and Facebook groups have become popular means of sharing and discussing news in non-Western countries such as Brazil, Malaysia, and Turkey. WhatsApp groups were exploited in both the 2018 Brazilian presidential election (Machado et al. 2019) and the 2019 Indian general election (Narayanan et al. 2019) to distribute misleading and divisive political content. Notably, information flows on mobile messaging apps are encrypted and generally private, making it hard for news organisations and digital platforms themselves to observe, correct, and limit the spread of inaccurate content via this medium.