The novelty of today’s government disinformation during war

While disinformation has always been a feature of war, its form and reach have evolved to fit prevailing conditions. Today, the particularities of a saturated, real-time, interactive global media environment shape the nature of conflict disinformation, not least because long-standing institutional bulwarks against disinformation have been eroded in the internet age (Lazer et al. 2018). So, too, does the increasingly visible public debate about the threat that such disinformation poses: disinformation is now itself a securitised issue. The prevailing environment has been described variously in recent years as the 'post-truth age' and an 'age of political uncertainty' (Surowiec and Manor 2021). It is an age in which people are increasingly amenable to placing their trust in 'alternative' sources rather than in established sources of political and institutional authority (Coleman 2018). Add to this the vast number of commercial and state-funded 'alternative' news providers in operation, and the environment in which disinformation circulates becomes exponentially more complex and multidirectional, with often unclear relationships between the different actors.

One of the most obvious results of this change is the ability to bypass the traditional gatekeepers of news and information instantly. It is now the norm for states, defence ministries, embassies, and militaries to engage directly with publics via social media (Crilley 2016; Kuntsman and Stein 2015). Throughout the Syrian conflict, for example, many states and political actors sought to use social media to communicate with different audiences, and Russia's defence ministry came under scrutiny for releasing many questionable statements. These included a video ostensibly showing the 'undamaged and fully operational' marketplace of a town that had recently been reported as bombed. Russia's embassy in Syria followed this with a tweet alleging that the 'White Helmets' had faked the bombing, and its account was temporarily suspended by Twitter for violating its rules.

Sometimes, however, disinformation campaigns are more opportunistic. After a controversial US politician insinuated that the 2017 Khan Sheikhoun sarin attack may have been a 'false flag' operation, coordinated activity from sources previously linked to Russian information campaigns emerged around the #Syriahoax hashtag (Hindman and Barash 2018, 39–40). Similarly, disinformation is often articulated by actors at one remove: Russia's international broadcasters, RT and Sputnik, frequently voice scepticism of the White Helmets organisation across their broadcast content and multi-platform online outputs, often using 'independent' voices such as freelance journalists, academics, and high-profile celebrities to lend legitimacy to the charges made (RT 2018b; Sputnik 2018). Such guests tend to disseminate their controversial claims across a particular range of fringe news outlets online, including Dissident Voice, 21st Century Wire, and Alternative View. When presenting their views to Russia's international broadcasters, it is not uncommon for these guests to engage in complex conspiracy theorising or to project the charge of 'disinformation' onto the opposing side (RT 2018a, 2020; Sputnik 2018). For the media outlets concerned, this offers a degree of plausible deniability: a network that accurately reports what an external figure has said can present itself as fulfilling its journalistic obligations. If the claim itself contains disinformation, the network can argue that it has merely reported it rather than endorsed it.

However, this low-entry-cost, fragmented media market is vital to understanding how disinformation operates online. This is because an individual's likelihood of believing a falsehood increases with repeated exposure to it (Pennycook et al. 2018), and news consumers are often unmotivated to assess critically the news that they consume (Pennycook and Rand 2018). As these processes demonstrate, it is not always clear whether the states involved generated particular claims or whether state-aligned actors merely reproduce externally circulating disinformation that suits their sponsors' preferred framing of a conflict. Either way, audiences' prior awareness of the affiliation and/or intentions of actors like RT appears not to influence whether those actors' specific claims have an impact on them (Fisher 2020). What is more, audiences no longer passively consume disinformation but play a role in its production and recirculation. Liking, sharing, or commenting on particular social media posts disseminates them amongst social networks and also increases their effect through the 'implicit endorsement that comes with sharing' (Lazer et al. 2018, 3).

Perhaps one of the clearest conclusions from this real-time feedback is how conducive affective and emotive representations are to the viral spread of particular stories and claims. Stories are most likely to spread rapidly when told from an immediate or urgent perspective, such as that of an eye-witness (Anden-Papadopoulos 2013). Furthermore, news markets are structured around the knowledge that users tend to share items presented in an emotive way (Bakir and McStay 2018), even when they have not read beyond the headline (Dafonte-Gomez 2018), and the business model of most social networks rests on complex statistical models that predict and maximise audience engagement in order to drive advertising revenue (Bakshy et al. 2015). When it comes to media content about war, it is precisely these affective and emotive stimuli, so easily combined in online multimedia offerings, to which audiences are most likely to relate (Solomon 2014).

Beyond their role as eye-witnesses, individuals' roles as media consumers and producers are also crucial to understanding the spread of disinformation about war and conflict today. Fellow citizens are often perceived as more trustworthy or reliable than official sources, and their online comments can influence how subsequent viewers assess particular online artefacts. However, it is difficult to ascertain reliably whether particular sources are state sponsored, independent, or operating in a grey space in between. It is, for example, a relatively simple matter to use 'astroturfing' techniques to create the impression that comments deriving from managed groups of social media accounts represent 'ordinary citizens acting independently'. Such comments often disseminate and amplify disinformation. Their impact on political social media discussions beyond their own network of information operatives is debatable (Keller et al. 2020), but there is evidence that their discussions can change individuals' opinions about some of the issues they discuss and increase uncertainty about them, even when those individuals are aware that the commenters may not be genuine (Zerback et al. 2020).

Today, disinformation efforts in war need not be convincing in isolation, nor stand up particularly well to scrutiny: media research has shown that at crucial times false stories are shared more widely than accurate news (Silverman 2016) and are widely believed (Silverman and Singer-Vine 2016). What is more, the overall impact of these rapid reactions can be hard to reverse, since the mere appearance of fake news and debate about it can set the media agenda (Vargo et al. 2018), while repeated exposure to falsehoods (even as part of a retraction) has been associated with increased belief in them (Pennycook et al. 2018; Swire et al. 2017). Given these issues, we now turn to discussing new directions for research on government disinformation during war.
