Consequences of computational propaganda

The first documented instance of using social bots and ‘astroturf’ campaigning in political communication on social media dates back to the 2010 US midterm elections (Ratkiewicz et al. 2011). However, researchers are still working to determine the concrete societal and electoral effects of political bot communication more narrowly, and of dis/misinformation and propaganda on social media more broadly. To date, scholarship studying fake news and bot-driven content has devoted considerable attention to highlighting the role automated social media accounts play in disseminating low-credibility content, including false news reports, hoaxes and clickbait headlines (Shao et al. 2018; Bastos and Farkas 2019). But knowing that many people saw or shared this sort of problematic political content, as Lazer et al. (2018) note, is not the same as knowing whether digital disinformation affected users’ behavior.

New work in computational propaganda suggests that bots were successful in amplifying negative and inflammatory stories and sentiments during the 2017–2018 anti-government protests in Iran (Thieltges et al. 2018) and the 2017 Catalan referendum in Spain (Stella, Ferrara and De Domenico 2018). Researchers found that this amplification then resulted in increased polarisation amongst dissenting voices. Political polarisation, according to Bernhardt, Krasa and Polborn (2008), ultimately causes important information to be lost due to media bias. This, in turn, can leave voters without quality information when they go to the polls, making electoral mistakes more likely.

Scholarship exploring audience engagement with fake news during the 2016 US election suggests that the anticipated effects are ‘less than you think’ (Guess, Nagler and Tucker 2019) because ‘persuasion is really hard’ (Little 2018, 50). When evaluating the success of computational propaganda campaigns during elections, it is important to note that effects may not always be obvious or linear. Second-order effects may be easier to track than first-order ones, and indirect effects more visible than direct ones. Importantly, many bots and other groups working to amplify disinformation do not communicate directly with people on social media; rather, they are constructed to trick trending algorithms and journalists into re-curating content to users via existing ‘trusted’ sources (Woolley and Guilbeault 2017).

While there has been a consistent effort to study the quantitative breadth of computational propaganda campaigns (Stukal et al. 2019; Ruck et al. 2019; Spangher et al. 2020; Bessi and Ferrara 2016; Shao et al. 2018; Badawy, Ferrara and Lerman 2018), much research remains to be done on the persuasive qualities of the groups who generate such campaigns and on the consumer demand for their messages.

Theorising the diffusion of propaganda over social media

The rise of social media as crucial tools for information sharing has disrupted the traditional pathways that information follows to reach its intended audience, and this inherently changes the way public opinion is formed today. Recent studies detailing news-sharing practices emphasise Twitter’s ‘crowd phenomenon’ (Kim, Newth and Christen 2014), which makes the platform a conducive space for news diffusion, facilitating immediate and wide audience exposure to information. As Hummel and Huntress (1952) note, this is exactly what propaganda aims for: if it does not reach its intended audience by making people ‘listen, or read, or watch’, it ultimately fails (51).

For the past decade, communications scholars have made a concerted effort to study what makes messages go viral on Twitter (Stieglitz and Dang-Xuan 2013; Hansen et al. 2011), but until recently there has not been a holistic approach to explaining why emotionally charged and politically polarised messages diffuse at a higher rate and scale in digital arenas. The mediated skewed diffusion of issues information theory (MSDII), however, works to bridge this gap (McEwan, Carpenter and Hopke 2018).

MSDII provides an alternative theoretical perspective to well-known theories concerning echo chambers (Jamieson and Cappella 2008) and filter bubbles (Pariser 2011). Using exposure as a key metric for determining message diffusion, MSDII suggests that predisposed views (or personal ego involvement with an issue), message quality (the shorter a message is, the more factual, less biased and stronger in argument quality it is perceived to be) and the user’s social network ecosystem (the larger the following, the more likely the user is to encounter opposing views) each contribute to wider information sharing on social media (McEwan, Carpenter and Hopke 2018).

Ego involvement, or individuals’ determination to express views dear to them while also seeking out and sharing information that supports personal beliefs, is conceptually similar to individuals’ decisions to practice selective exposure in response to like-minded information and selective avoidance when encountering ideologically conflicting views (Stroud 2011). This practice solidifies a ‘web of belief’, as discussed by Quine and Ullian (1978), which allows people to reject or dismiss information based on how well it fits with their social group’s conventions. MSDII contends that, even though a social media news feed provides opportunities for users to receive ‘attitude-consistent and attitude-inconsistent messages’, ego involvement with an issue can still make it difficult for networked communities of strong-tied users to ‘accurately assess’ the quality of divergent arguments, regardless of positive message attributes (McEwan, Carpenter and Hopke 2018, 2). With this in mind, mere exposure to like-minded information only strengthens ideological beliefs and attitudes and does little to change them. Increased presence on social media, combined with connections to like-minded individuals, solidifies one’s views and perceptions, turning the cognitive process into a ‘cyclical feedback loop’ (McEwan, Carpenter and Hopke 2018, 9), intensified over time through repeated exposure and engagement.

Crucially, MSDII’s key propositions still require further empirical testing. The theory does, however, provide new insights into how social media, given their technological affordances and embedded network ties, facilitate and extend the reach of computational propaganda once it reaches a like-minded audience. It also holds exciting potential to contribute to our understanding of why those exposed to computational propaganda could be susceptible to ‘biased argument processing’ (McEwan, Carpenter and Hopke 2018, 4) that contributes to an ‘affective and behavioral divide’ across party lines, a phenomenon Iyengar et al. (2019) define as ‘affective polarization’ (134).

 