A historical overview of propaganda

The rise of organised social media campaigns in recent years has led scholars to argue for the continued relevance of propaganda studies (Jack 2019; Woolley and Howard 2016), which may inform current debates on the expanding scale of information warfare. Classical scholarship defines propaganda as a form of deliberately organised manipulation of public opinion. It typically involves a small group of institutional actors exerting power over a larger populace, striving to ‘activate people’, ‘arouse questions’, and ‘lead to critical reactions’ (Hemanus 1974, 215).

In manipulating public opinion, propagandists aim to convert suggestions into ‘strong active beliefs’, triggered and reinforced through past memories (Moscovici 1993, 73). Through the ‘manipulation of symbols’ (Hemanus 1974, 215), propaganda shapes people’s thoughts, beliefs, and actions by creating ‘pictures in our heads’ or igniting stereotypes (Lippmann 1922). Propaganda escalates emotions of ‘hope or fear’, which lead audiences to internalise information without scrutiny or deliberation (Lee 1952, 62). If the persuasion is successful, Ellul (1973, 166) contends, the suggested notions seamlessly transform into ‘prejudices and beliefs, as well as objective justifications’, representing new notions of reality. Classic scholarship therefore suggests that propaganda does not just manipulate the mind; it can also cause other forms of distress, alienating those who spread and consume it and subtly, slowly leading them to give up their beliefs and integrity in order to obey someone else (Ellul 1973).

Media (and therefore uses of propaganda) have changed rapidly over the century since Lasswell’s (1927) pioneering scholarship in political communication and media studies. As of 2019, 43 percent of US adults consumed news via Facebook, while 12 percent relied on Twitter (Pew Research Center 2019). The networked architecture of social media governs the flow of information, algorithmically granting certain messages wider exposure and opening up new, hyper-specific forms of audience engagement (Van Dijck 2012). Jack (2019) highlights the uncertainty over how today’s active audiences, empowered by social sharing, may inadvertently contribute to a wider spread of dis/misinformation online. Introducing the framework of ‘wicked content’, Jack (2019) calls for treating propaganda as ‘a sensitizing concept’ that also encompasses fake news (448). She suggests that this complementary concept retains room for ‘ambiguities of meaning, identity, and motivation, along with unintentional amplification and inadvertent legitimization’ of problematic digital content circulating in an ever-more-complex media ecosystem (449).

Computational propaganda: a global overview

Computational propaganda employs automated and algorithmic methods to spread and amplify messages on social media, coupled with the overt propaganda tactics of ideological control and manipulation (Woolley and Howard 2018). Unlike the earlier information systems for propaganda discussed by pioneering researchers (Ellul 1973; Lee 1952), the current form operates via ‘the assemblage of social media platforms, autonomous agents, and big data’ (Woolley and Howard 2016, 4886). It can rely upon political bots, or automated computer code built to mimic human users and push out targeted political messages. Content that begins with bots is often immediately spread across a wide network of high-volume users, including politicians, influencers, and pundits (Woolley and Guilbeault 2017).

Computational propaganda is an international phenomenon, as Russia’s global digital machinations illustrate well. Brexit served as ‘a petri dish’ used to test and gear up for digital information campaigns during the 2016 US election (Mayer 2019). A growing body of research details the ways in which computational propaganda has been strategically leveraged across several government types: by oppressive political regimes, including in China (Yang 2017), Hong Kong (Zhong, Meyers Lee and Wu 2019), Iran (FireEye Intelligence 2018), Saudi Arabia (Benner et al. 2018), and Venezuela (Forelle et al. 2015), and by embattled democracies such as Brazil (Arnaudo 2017), Turkey (Akin Unver 2019), and the Philippines (Bengali and Halper 2019).

Political groups in the United States and other Western democracies, many of which face growing polarisation and illiberalism, have employed similar digital tactics in efforts to shape public opinion in their favour. Studies of Twitter traffic surrounding the 2016 US election indicate that bots accounted for anywhere from 8 to 14 percent of active accounts at a given time and generated roughly one-fifth to one-third of the overall Twitter traffic around the event (Shao et al. 2018; Bessi and Ferrara 2016). In Italy, bots have been used to deliberately target influencers during political events by enticing them to share misleading information (Stella, Ferrara and De Domenico 2018). Respected media outlets in Western democracies are also targets of digital disinformation campaigns, many amplified by bots. Lukito et al. (2018) found that 116 stories from 32 major news outlets, including The Washington Post, National Public Radio, USA Today, and The New York Times, embedded at least one IRA tweet as a source, often showcasing an account expressing strong partisan beliefs. Bots largely spread and amplify messages from low-credibility sources (Shao et al. 2018), and their conversations rarely leave the confines of social media platforms; when bot-driven messages are nonetheless quoted by mainstream media, their influence on public opinion operates through second-order, indirect effects.
