Information disorder practices in/by contemporary Russia

Svetlana S. Bodrunova

Introduction: the ‘Russian trolls’ and academic research

Research on modern ‘Russian propaganda’ has been growing rapidly in recent years. It has examined information disorder activities (Wardle and Derakhshan 2017) organised, allegedly, by actors formally or informally affiliated with the Russian state and directed mostly at foreign populations. The focus has been on the activities of RT (formerly Russia Today), the state-owned corporation oriented towards foreign audiences, and on detecting and proving the existence (and, rarely, assessing the impact) of online computational propaganda tools (Woolley and Howard 2018), including hacking, bot activity, trolling, and mixed-media propagation of fake and misleading information.

The first line in this research comprises investigative reports by US and European defence institutions, parliamentary groups, and think tanks, as well as detailed accounts based on open sources (see Lysenko and Brooks 2018), framed within international cyberwar. The main goals are to identify and prove organised efforts, define the extent of the threat to national security, and suggest counteraction strategies. Scholars have studied alleged Russian disinformation campaigns on social networking sites through a combination of social network analysis and textual methods, with varying degrees of success in distinguishing trolls/bots from ordinary users. Twitter has also released datasets covering at least 3,800 accounts altogether (without, though, publishing the methodology behind their identification); these datasets have been used for machine learning and for journalistic investigations by NBC News and CNN. Other platforms, such as Facebook and Tumblr, have likewise announced blocking accounts identified as Russia-linked trolls/bots.

Several research findings are echoed in government investigations such as the Report on the Investigation into Russian Interference in the 2016 Presidential Election, known as the Mueller Report (2019), and the subsequent indictment by a grand jury of the US District Court for the District of Columbia (www.justice.gov/file/1035477/download). The indictment states that 13 Russian citizens engaged in activities that breached the US FECA and FARA acts governing federal elections and the registration of foreign agents. The described actions went further than pro-Russian or pro-Trump posting: they included, as was stated, using stolen identities and spending money on political advertising without registering. Unlike the Mueller Report, the indictment did not openly relate these activities to the Russian authorities, who have also repeatedly denied any relation to them. Despite this, the indictment has led to widening the list of US sanctions against Russian individuals.

A second line of research has approached the subject by foregrounding questions about democratic communication. Karpan and co-authors (2019) closely examined the organisational routines of troll factories in Russia (better known as ‘Olgino trolls’), China, and the Philippines, including an attempt at ‘finding an online army’. Other studies, instead, argue that there is no simple technique capable of detecting a troll (unlike a bot). The sophisticated work of today’s potential trolls is often based on bias, spin (Linvill and Warren 2020), and subversive tactics of ‘justifying their upsetting and threatening statements by characterization of reality as upsetting and threatening’ (Wijermars and Lehtisaari 2019: 233). This makes trolls hardly distinguishable from ordinary users expressing discontent.

Within this line of research, here I follow a regretfully small number of works that offer a more nuanced view of the distortions of Russian-language communication. Kazakov and Hutchings (2019) intelligently point out the ‘questionable assumption that information war is invariably a one-sided affair — Kremlin-initiated activities to be “countered” by Western democracies’. In their view, the argument about a line of transmission of misleading messages ‘from “media outlets” through “force multipliers” to “reinforcing entities”’ reconstructed ‘in the numerous intelligence-led reports’ (ibid.) neglects the underlying instability and fluidity of the communities at which potential disinformation is targeted, as well as genuine grassroots user participation in the ideological Russia-West clash. In his comprehensive reconstruction of contemporary studies of Russia-linked computational propaganda, Sanovich (2017) shows that disinformation directed both outside and inside the country has, quite unexpectedly, been linked to competition for influence within the political elite and has had a non-systemic, trial-and-error nature. Thus, the post-Soviet stereotypes that tell of wide-scale, monolithic, and well-coordinated state propagandistic efforts might be misleadingly simplistic when used to assess today’s situation.

I would add further considerations that call for reconsidering the dominant cyberwar paradigm, which is prone to focusing on the external dimensions of Russian propaganda while omitting other important ones.

First, academic research does not aim to (dis)prove the linkages of disinformation activities, even if identified, to particular people or organisations; ultimately, that is done by courts and international organisations. Yet, without such proof within academic texts, researchers’ speculations about ‘propaganda machines’ remain allegations. Of the over 40 academic papers on Russian disinformation I have reviewed, none contained airtight, solid proof of a linkage between disinformation efforts and particular government authorities in Russia. Actually, it would be surprising if we could find any: searching for such proof would go against the research designs and, in general, turn science into investigation. It might also endanger domestic scholars, especially in countries with regimes more restrictive than Russia’s.

Second, one needs to remember that it is all too easy to view pro-Russian information activity as organised and state-induced. One example comes from the same review by Sanovich (2017: 9). He states that the Berkman Center at Harvard (Barash and Kelly 2012) detected the first large-scale use of pro-government bots and trolls in 2012, while the paper itself never mentions bots or trolls and offers the possibility that a ‘committed set of users may use the pro-government hashtag . . . perhaps in an organizational or mobilizing capacity’ as an alternative explanation of the density of pro-governmental clusters on Russian Twitter (ibid.: 10). Taken together, these considerations show that the scholarly community needs to better elaborate whether, and how exactly, we should incorporate data on disinformation into academic research.

Third, most of the reviewed research assumes but does not demonstrate that disinformation efforts had an impact. Actual measurements of impact are rare. Just a handful of works set out to measure the impact of trolling on Twitter, by analysing exposure to the discovered trolls/bots, be it by ordinary users with left/right leanings (Badawy, Ferrara, Lerman 2018; Spangher et al. 2018) or by journalists (Im et al. 2019), or by analysing the ability to disseminate links across various platforms (Zannettou et al. 2019). Exposure estimates may, indeed, be impressive: tweets plus retweets by spreaders of Russia-linked content could reach 12 million during the 2016 elections (Badawy, Ferrara, Lerman 2018: 262). However, a high impact of trolls/bots on behaviours such as voting and protesting remains largely undocumented. One exception (Zerback, Toepfl, Knoepfle 2020) found a short-term impact of exposure on expressed views.

Fourth, there is not enough proof to attribute all misinformation in the 2016 US elections to Russia or to argue that it is widespread in Western public opinion. A widely cited technical report by the European Commission (EC) on selective exposure to fake news during the elections (Guess, Nyhan, Reifler 2018) does not link fake news websites to Russia, while it mentions US-based sites such as the ultra-right Breitbart and the satirical The Daily Currant. Figures on impact are uneven. Whereas the EC report claimed that approximately one in four Americans visited a fake news website during the 2016 elections, a study of electoral Twitter in Michigan by Howard and colleagues (2017) attributed only 1.8 percent of junk/bot news to Russian sources, and another study (Im et al. 2019) stated that a meagre 2.6 percent of US journalists on Twitter were reached by the suspect accounts. These findings may leave room for claiming that, even if the Russian effort existed, its impact was negligible and that ‘the studies . . . do not add any substance to allegations of Kremlin culpability’ (Martin 2017).

Fifth, research on Russian computational propaganda, as my colleagues and I have stated elsewhere (Koltsova and Bodrunova 2019), overshadows both the multi-faceted communication processes evolving in Russia since the early 2000s and their wider political, societal, and historical causes. Going beyond big data studies and ‘engag[ing] with the forms of power and knowledge that produce’ computational propaganda (Bolsover and Howard 2017) are not enough. We need to place the disinformation-oriented efforts into contexts that provide explanations and propose solutions. Shedding light on the communicative climate of Russia in the 2010s would help expand Russian disinformation studies by shifting the dominant focus from cyberwarfare to how disinformation and misinformation have permeated domestic Russian communication, in which organised pro-establishment efforts have played only a part. Next, we reconstruct the structural features of the Russian public sphere that have prevented efficient strategies for addressing the growing wave of disinformation practices.

 