Digital platforms and disinformation tactics

The infrastructure of platforms facilitates anti-immigrant disinformation in many ways. Engagement metrics incentivise attention-grabbing, low-quality content, and these metrics can be manipulated by bad actors who piggyback on trending content and use false accounts and automated ‘bots’ to inflate the popularity of content (Shao et al. 2018). Moreover, micro-targeting services and recommendation algorithms define users by interests with little regard for whether these interests are extremist (Angwin et al. 2017). More generally, platforms enable disinformation to travel at an unprecedented speed and scale. Törnberg and Wahlström (2018) argue that social media provide multiple opportunity mechanisms for anti-immigrant disinformation, including discursive opportunities to exploit topical issues, group dynamic opportunities to strengthen community ties, and coordination opportunities to target different audiences. In this context, some argue that social media platforms have given rise to a digital hate culture (Ganesh 2018) augmented by the coordinated action of anonymous and automated accounts (Phillips 2015; Zannettou et al. 2018).
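
To make the mechanics concrete, the following is a minimal Python sketch of the kind of amplification signals researchers in this space compute. It is illustrative rather than a reconstruction of Shao et al.'s (2018) method: the `Share` record, the 60-second burst window, and the 144-posts-per-day threshold (roughly one post every ten minutes, around the clock) are hypothetical assumptions of our own.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Share:
    account_id: str
    account_daily_posts: float  # the sharing account's average posts per day
    shared_at: datetime         # when the account reshared the post

def amplification_signals(published_at, shares,
                          burst_window_s=60, high_volume_threshold=144):
    """Two crude signals of inorganic amplification: the fraction of shares
    arriving within seconds of publication, and the fraction coming from
    unusually high-volume accounts. Thresholds are illustrative."""
    burst_cutoff = published_at + timedelta(seconds=burst_window_s)
    n = len(shares) or 1  # avoid division by zero on unshared posts
    early = sum(1 for s in shares if s.shared_at <= burst_cutoff)
    hyperactive = sum(1 for s in shares
                      if s.account_daily_posts >= high_volume_threshold)
    return {"early_burst_share": early / n,
            "hyperactive_share": hyperactive / n}

# Hypothetical post reshared twice within seconds by very active accounts.
t0 = datetime(2019, 3, 1, 12, 0)
shares = [Share("a1", 200.0, t0 + timedelta(seconds=5)),
          Share("a2", 180.0, t0 + timedelta(seconds=9)),
          Share("a3", 3.0, t0 + timedelta(hours=2))]
print(amplification_signals(t0, shares))
# {'early_burst_share': 0.666..., 'hyperactive_share': 0.666...}
```

Operational detection systems combine far richer features, but even these two ratios capture the underlying logic: inauthentic amplification tends to arrive fast and to come from hyperactive accounts.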

As noted earlier, the far right uses different platforms for community building and for targeting wider audiences. The segmentation of the far right’s online activity is partly a response to pressure from internet service providers (ISPs): as ISPs withdrew technical support from platforms known to foster extremist ideologies, activity moved to new platforms with permissive free speech policies (Zannettou et al. 2018). Consequently, anti-immigrant disinformation campaigns are coordinated on these lesser-known platforms, and the messages are then disseminated to a wider audience on popular platforms such as Twitter and Facebook (Davey and Ebner 2017; Marwick and Lewis 2017).

Platform affordances also facilitate the online manipulation and disinformation tactics of the far right and the alt-right. These activities broadly revolve around four tactics: appropriating existing hashtags (Jackson and Foucault Welles 2015), decontextualising news stories (Wardle and Derakhshan 2017), deploying memes (Ekman 2019), and using automated bots (Avaaz 2019). In a detailed analysis of Twitter activity, Graham (2016) identified key disinformation tactics that enable anti-immigrant actors to direct their messages to a wider, mainstream audience: ‘piggybacking’ and ‘backstaging’ infiltrate trending topics, while ‘narrating’ uses irony to invert the original meaning of a trending topic.

This process of appropriating existing hashtags has also been characterised in terms of ‘hijacking’ (Jackson and Foucault Welles 2015) and the promotion of ‘critical counter-narratives’ (Poole et al. 2019). For example, during the 2016 US presidential election, far-right activists routinely used the hashtag #StopIslam in conjunction with pro-Trump and anti-Clinton hashtags as part of a broader effort to normalise an anti-immigration narrative and to introduce anti-immigrant disinformation into the election campaign (Poole et al. 2019). Other studies have identified the use of this manipulation tactic in relation to the refugee crisis (Siapera et al. 2018) and Brexit (Green et al. 2016). These hashtag campaigns support the formation of ‘affective’ (Papacharissi 2015) or ‘ad hoc’ (Dawes 2017) publics that facilitate the circulation (Groshek and Koc-Michalska 2017) and fermentation (Farkas et al. 2017) of far-right narratives and attitudes. While the ad hoc publics created by hashtag campaigns tend to be short-lived (Poole et al. 2019; Dawes 2017), they have a ‘liminal’ power to disorientate and confuse public debate (Siapera et al. 2018).
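
The empirical footprint of hashtag appropriation is often visible in simple co-occurrence counts: a hijacked tag suddenly travels with the hijackers’ own tags, much as #StopIslam travelled with pro-Trump hashtags. The sketch below illustrates this kind of counting; it is not the method of Poole et al. (2019), and the seed tag, toy corpus, and function names are hypothetical placeholders.

```python
import re
from collections import Counter

HASHTAG = re.compile(r"#\w+")

def cooccurring_tags(tweets, seed="#stopislam", top_n=10):
    """Count the hashtags that appear alongside a seed hashtag; tags that
    a campaign attaches to a trending topic rise to the top of this list."""
    counts = Counter()
    for text in tweets:
        tags = {t.lower() for t in HASHTAG.findall(text)}
        if seed in tags:
            counts.update(tags - {seed})
    return counts.most_common(top_n)

# Toy corpus: two tweets carry the seed tag, one is unrelated.
tweets = ["#StopIslam #MAGA retweet this",
          "debunking the #StopIslam claim #disinfo",
          "#Brexit talks resume today"]
print(cooccurring_tags(tweets))  # [('#maga', 1), ('#disinfo', 1)]
```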

Decontextualisation is another simple but effective disinformation tactic (Wardle and Derakhshan 2017). By omitting key explanatory factors, adding textual amendments, or adopting different naming standards, a relatively neutral story can be transformed into one that is imbued with racist or anti-immigrant disinformation (Ekman 2019). In contrast to ‘fake news’ that is entirely fabricated, these disinformation stories contain nuggets of truth that are corroborated by mainstream news sources. As noted, decontextualisation tactics are difficult to counter in the case of immigration because there is an ongoing dispute about how to establish facts and interpret immigration statistics (Ousey and Kubrin 2018).

To appeal to a broader, younger audience, anti-immigrant actors also make extensive use of memes, music videos, jokes, and irony (Beran 2017; Luke 2016; Marwick and Lewis 2017; Nagle 2017). Ekman (2019) outlines how these manipulation tactics gradually normalise previously unacceptable utterances: utterances that dehumanise immigrants and even cast them as legitimate targets of violence. The participatory culture of digital media is central to the success of this tactic. For example, Marwick and Lewis (2017: 4) found that memes often function as image macros that are engineered to ‘go viral’, conveying far-right ideology through humour.

As with disinformation generally, automated bots and fake accounts are frequently used to inflate the popularity of anti-immigrant disinformation. Avaaz (2019) investigated far-right disinformation on Facebook ahead of the European Parliament elections. In response, Facebook removed 77 pages and 230 accounts from France, Germany, Italy, Poland, Spain, and the UK. Facebook estimated that this content reached 32 million people and generated 67 million ‘interactions’ through comments, likes, and shares (Avaaz 2019). Across these countries, fake and duplicate accounts artificially inflated the popularity of anti-immigrant disinformation. In some cases, Facebook pages were deceptively branded as lifestyle content to attract followers and then switched abruptly to a focus on immigration.
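
Avaaz (2019) does not publish its detection pipeline, but one signal it describes, duplicate accounts pushing the same content, can be illustrated with a short sketch. Assuming (our assumption, not Avaaz’s) that duplicate accounts recycle near-identical text, grouping accounts by a normalised hash of their posts surfaces candidate clusters; all names and thresholds here are hypothetical.

```python
import hashlib
import re
from collections import defaultdict

def normalise(text):
    """Lowercase and strip URLs and punctuation so trivially edited
    copies of a post hash to the same value."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return " ".join(re.sub(r"[^a-z0-9 ]+", " ", text).split())

def duplicate_clusters(posts, min_accounts=3):
    """Group accounts that publish (near-)identical text; large groups
    are candidates for coordinated duplicate or fake accounts."""
    clusters = defaultdict(set)
    for account, text in posts:
        digest = hashlib.sha1(normalise(text).encode()).hexdigest()
        clusters[digest].add(account)
    return [sorted(accounts) for accounts in clusters.values()
            if len(accounts) >= min_accounts]

# Three accounts recycle the same sentence with cosmetic edits.
posts = [("acct1", "Shocking crime wave! https://ex.ample/1"),
         ("acct2", "shocking CRIME wave!!"),
         ("acct3", "Shocking crime wave"),
         ("acct4", "Local bakery wins award")]
print(duplicate_clusters(posts))  # [['acct1', 'acct2', 'acct3']]
```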

While platforms have made some moves to counteract extremist content, the European Commission’s Assessment of the Implementation of the Code of Practice on Disinformation, published in May 2020, found that the platforms’ self-regulatory response suffers from a lack of uniform implementation and that progress is consequently uneven. It is likely that platforms will come under increasing pressure to address this issue. However, we suggest that addressing disinformation is not simply a matter of targeting the actors who create it and the platforms that facilitate it: audiences are a central part of the equation.
