II. Media misinformation and disinformation

The evolution of computational propaganda: theories, debates, and innovation of the Russian model

Dariya Tsyrenzhapova and Samuel C. Woolley

Acknowledgements

For their help and feedback, the authors would like to acknowledge colleagues in the School of Journalism and Media and the Center for Media Engagement (CME), both at the University of Texas at Austin. For their generous support of the Propaganda Research Team at CME, we would like to thank the Beneficial Technology team at Omidyar Network, the Open Society Foundations-US, and the Knight Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of these funders. Please direct correspondence to the School of Journalism and Media, Belo Center for Media, 300 W Dean Keeton St, Austin, TX 78712.

Introduction

Russia’s digital interference during the 2016 US presidential election and the UK Brexit referendum helped catalyse broad international debates about the global spread of political manipulation online (Mueller 2019; U.S. Senate Foreign Relations Committee 2018). During the Arab Spring protests, Facebook and Twitter were celebrated for their power to mobilise crowds for social movements (Tufekci and Wilson 2012). The recent outbreak of dis/misinformation and inorganic political communication campaigns has, however, led many to argue that social media are now vessels for ‘new and powerful forms of disguised propaganda’ (Farkas 2018, 6), as well as channels for ‘organized and coordinated’ public opinion manipulation (Bradshaw and Howard 2017, 4).

But computational propaganda campaigns on social media are not simply powerful and organised; they are massive, far-reaching, and international. The people who wage them make use of legitimate advertising tools on sites like Facebook as well as illicit tactics, including fake profiles and social media bots — pieces of software algorithmically programmed to spread propaganda messages (Woolley and Howard 2016). Russian efforts during the 2016 US election underscore these features.

During the investigation of US special counsel Robert S. Mueller III, Twitter identified nearly 4,000 accounts associated with Russia’s Internet Research Agency (IRA) and over 50,000 automated accounts linked to the Russian government (Mueller 2019). All told, in the ten weeks prior to the 2016 US election, 1.4 million people directly engaged with these tweets through quoting, liking, replying, or following (Twitter Public Policy 2018). In a similar effort, Facebook disabled 5.8 million fake accounts, including 120 IRA-linked pages that posted 80,000 pieces of content between June 2015 and August 2017, according to Facebook general counsel Colin Stretch. Facebook also discovered that the IRA had disseminated more than 3,500 ads through a number of groups, including ‘Stop All Immigrants’, ‘Black Matters’, ‘LGBT United’, and ‘United Muslims of America’. These ad campaigns cost the IRA as little as $100,000 (Mueller 2019), only a tiny fraction of its $1.25 million monthly budget in the run-up to the US election (United States of America v Internet Research Agency LLC 2018), and reached almost one in three Americans, or 126 million US-based users (U.S. Senate Committee on Crime and Terrorism 2017).

The Russian propaganda machine has been successful in harnessing social media as platforms for stimulating social and political unrest, particularly in pushing polarisation, apathy, and disbelief both online (Bastos and Farkas 2019; Bessi and Ferrara 2016) and offline (Allbright 2017). The government in Moscow and many other powerful political actors well understand Manuel Castells’s premise surrounding information warfare in a network society: ‘torturing bodies is less effective than shaping minds’ (Castells 2007, 238).

In this chapter we focus on the global diffusion of the Russian computational propaganda model in order to highlight broader changes in the use of social media in attempts to deceptively alter the flow of information during high-stakes political events. Drawing on literature from propaganda studies, reflexive control (RC) theory, and information diffusion, we conceptualise how false news messages, built with the intention of igniting social disagreement, can be harnessed to mobilise social movements globally.

There is a clear and continued need for more effects-driven research in computational propaganda. We point to several pieces of work that provide theoretical insight into how behavioural changes may occur as a result of an audience’s exposure to disinformation online. As Bernhardt, Krasa and Polborn (2008) note, the residual effects of this manipulation can include affective polarisation and media bias, which lead to electoral mistakes. In response to targeted criticisms about the lack of demonstrable effects of computational propaganda on behaviour at the voting booth (Metaxas, Mustafaraj and Gayo-Avello 2011), we include a preliminary discussion on the difficulties of measuring first-order (direct) socio-political effects in the era of digital dis/misinformation. We join other scholars in calling for more computational propaganda and digital dis/misinformation research exploring the measurement of second-order (indirect) behavioural changes.

 