False information as a strategy of social movement repression

We identify three types of actors who may use false information to oppose or repress social movements (see Earl 2011 for a review of research on repression).

State-based domestic repression

When domestic authorities engage movements, they draw on larger propaganda repertoires that mix accurate information, misrepresented information, and disinformation. The most successful realisation of social movement repression is quiescence. Decades ago, propaganda and censorship achieved some modicum of quiescence in the USSR, China, and other authoritarian states. While Russia and China used propaganda and censorship somewhat differently (Zhao 2012), they both used state-controlled media to bolster their regimes and censorship to limit unfavourable information. For instance, Munger et al. describe the Chinese approach to Tiananmen:

Early in the occupation of the Square by student protesters, the Chinese regime used the media to promote the narrative that these students were agents of the United States, aiming to undermine China. The students were unable to broadcast their true goal and grievances, and the regime’s narrative was unchallenged throughout much of China. Once the regime found it necessary to begin shooting, they switched their media strategy, banning all mention of the protest and its repression.

(Munger et al. 2019, 820)

More generally, Hassanpour (2014) argues that state control over news media may prevent mass dissatisfaction from becoming widespread in times of political unrest. What amounts to state-run media may also exist in nations with democratic elections but authoritarian leanings (e.g. Turkey, Ecuador) and serve a similar function (Walker and Orttung 2014).

Widespread internet usage has complicated this approach. Deibert et al. (2010) posit a ‘three generation’ framework for online propaganda, one that interestingly parallels historical offline tendencies. First, governments may limit access to the internet entirely (e.g. North Korea) or only in moments of turmoil (e.g. Egypt during the Arab Spring) (Howard et al. 2011). But wholesale restrictions are economically difficult to maintain; targeted censorship represents a second-generation approach, as has occurred in Russia with government-requested removals of anti-Putin social media posts (Sanovich et al. 2018). Chinese authorities allow criticisms of the state but heavily censor social media support for activism (King et al. 2013). Both first- and second-generation restrictions occur while traditional state-based or ‘statist commercialized’ media (Vartanova 2011) circulate pro-regime messages.

Third-generation strategies, which Deibert et al. (2010) refer to as ‘active engagement’, go on the information offensive (Gunitsky 2015). Unable to censor social movements entirely, governments hope to advance views favourable to the regime, promote seemingly grassroots opposition to social movements, and distract from opponents’ messages by offering alternatives. King et al. (2017) estimate that the Chinese government uses the so-called ‘50-cent army’ to create 448 million social media comments per year that largely ‘involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime’ (484). These posts do not engage with government sceptics or discuss controversial issues. Rather, ‘the goal of this massive secretive operation is ... to distract the public and change the subject’ (King et al. 2017, 484). As with any state propaganda, these efforts may spread disinformation or contested information that is reported as uncontested.

This response can be found in many nations. Munger et al.’s study of the La Salida/anti-Maduro movement showed that governments may ‘advance many competing narratives that [address] issues unrelated to the opposition’s criticism’ (2019, 815) in the hopes of flooding social media platforms and diverting attention from information they cannot suppress. In Mexico, so-called ‘Peñabots’ promote fake trends in order to distract from and drown out government criticism (Porup 2015). Elections and social movements may cross over, as occurred when Peñabots tried to distract from the emergence of the #YoSoy132 movement around the 2012 Mexican elections (Treré 2016).

Governments may also drown out messages they oppose by flooding digital spaces with spam. In Mexico, Suárez-Serrato et al. (2016) reported on an influx of spam tweets in response to the #YaMeCanse hashtag protesting government complacency in the disappearance of more than 40 student activists.

Authorities may also directly attack or discredit opponents by ‘mobilizing regime supporters to disrupt planned rallies, plant false information, monitor opposition websites, and harass opposition members’ (Gunitsky 2015, 45), drawing on age-old government false information tactics used even in nations with democratically elected governments (Cunningham 2004). During the 2009 Iranian protests, for instance, some Twitter accounts were suspected of being government-run in order to mislead the public about protest activities (Cohen 2009). Protest news spread on YouTube and Twitter, some of it planted from inside Iran’s governmental regime in hopes of paving the way for the arrest of opposition leaders (Esfandiari 2010). Likewise, following the Gezi protests, the Turkish government hired social media experts to boost pro-Erdoğan accounts and spread disinformation (Medieros 2014).

Regime campaigns can be quite sophisticated. Keller et al. (2017) show a division of tasks amongst different state actors promoting propaganda: some groups used their bots to amplify favoured news or social media posts (generated both from within the regime and by real external users), while others attacked opponents. State-orchestrated propaganda can also be designed to appear civic in origin (Keller et al. 2017). As with the astroturf movements discussed later in this chapter, even when information is not false, an element of deceit may come from making social media activity appear grassroots when it is not.

The effectiveness of these campaigns, though, is unclear (Gunitsky 2015). Studying the protests that erupted in Spain when the Spanish government was thought to have spread disinformation about the 11-M terrorist bombings in Madrid, Fominaya (2011) shows that disinformation can backfire. Moreover, movements can co-opt these tactics, as leaders of the Egyptian Uprising of 2011 did by posting disinformation about protests to evade Egyptian police (Kirkpatrick 2011).

 