Solutions to the problem of mis/disinformation

In light of the very real threat posed by mis/disinformation to democratic governance and the unprecedented scale at which digital media enable it to spread, efforts to effectively identify, limit, and correct it have become increasingly important.

Identifying mis/disinformation

To date, most work has focused on the development of automatic detection techniques to identify problematic content (Cook, Ecker & Lewandowsky 2015). These include classifiers to determine who might share misinformation online (Ghenai & Mejova 2018) and computational tools to detect clickbait, rumours, and bots. Whilst such tools can efficiently filter vast amounts of online content, the machine learning necessary to develop them can be prohibitively time and resource intensive (Tucker et al. 2018). Moreover, their effectiveness is limited by restricted access to social media data, as well as by increasingly effective automation that makes bots difficult to distinguish from real accounts. Even where detection is successful, bot developers soon adapt, and many bot accounts will never be discovered (Sanovich & Stukal 2018). Importantly, computational classifiers are not a replacement for human experts, who must intermittently retrain such tools and be on hand to mitigate any inadvertent discrimination classifiers may perpetuate (Ghenai & Mejova 2018). There is consequently a risk that such tools will be more effective and precise in countries and on topics that digital platforms prioritise for investment.
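To make the kind of tool described above more concrete, the following is a minimal, illustrative sketch (in Python, using scikit-learn) of a supervised text classifier that flags clickbait-style headlines. It is not drawn from any of the systems cited here; the example headlines, labels, and probability output are hypothetical placeholders, and a production system would require a large labelled corpus and the periodic expert retraining discussed above.

```python
# Minimal sketch of a supervised clickbait classifier: TF-IDF features feed a
# simple logistic regression model. The training examples and labels below are
# hypothetical placeholders, not real data from any cited study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = clickbait, 0 = not clickbait)
headlines = [
    "You won't believe what this politician just said",
    "Parliament passes budget after lengthy debate",
    "Ten shocking secrets the media won't tell you",
    "Central bank holds interest rates steady",
]
labels = [1, 0, 1, 0]

# Word and bigram TF-IDF features, followed by a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Estimated probability that a new headline is clickbait
print(model.predict_proba(["This one weird trick fools every fact-checker"])[0][1])
```

Even this toy example hints at the limitations noted above: the model only knows the labels humans give it, and it must be retrained as the style of problematic content shifts.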

Another approach to identifying mis/disinformation online is human content moderation. These efforts are conducted by large teams employed by social media platforms to monitor and review content in line with their terms of service or community standards (Gillespie 2018). However, such work is associated with mental health difficulties (Chen 2014), blowback from angry users (Newton 2019), and concerns about poor remuneration and the massive volume of content to be checked. Moderators often act on user reports, despite users' tendency to rely on partisanship and ideology when identifying misleading content (boyd 2017) or, indeed, to fail to challenge it at all (Chadwick & Vaccari 2019). Finally, the often implicit assumptions underlying content moderation threaten to undermine the key democratic value of freedom of information (Kaye 2019). The assumptions that any politically relevant but untrue speech should be censored, and that such determinations should be made by quasi-monopolistic private companies, arguably raise more problems than they solve.

Tackling mis/disinformation

Despite the difficulties inherent in isolating problematic content, further decisions must be taken about dealing with mis/disinformation when it is identified. Perhaps the easiest approach for platforms is to act on such content directly: taking it down, downranking or labelling it, adjusting algorithms to promote reliable information, indicating source quality and story veracity, banning offending accounts, and removing harmful bot activity (Lazer et al. 2018). However, this kind of moderation risks bias or error, such that legitimate content is accidentally removed, opening platforms up to accusations of censorship (Sanovich & Stukal 2018). Moreover, previous attempts at algorithm change have counter-intuitively increased the prevalence of divisive topics (NewsWhip 2019), and there is no guarantee that algorithms cannot be gamed by bad actors. There is also the issue of what to do when disinformation is spread by popular political actors, including incumbent presidents, whom platforms have been more reluctant to police than ordinary users.

Another popular approach to tackling mis/disinformation is fact-checking. However, creating effective fact-checks is resource intensive, and their efficacy can be limited (Marwick 2018). The fractured nature of the online environment makes it difficult for corrections to reach those exposed to mis/disinformation, with problematic content circulating for an average of 10 to 20 hours before fact-checking catches up (Shao et al. 2016). Where corrections do reach their desired audiences, repetition of mis/disinformation as part of the corrective effort merely increases its cognitive fluency, 'the experience of ease or difficulty associated with completing a mental task' (Oppenheimer 2008, 237), paradoxically making acceptance of the misleading information more likely (Lazer et al. 2018). Indeed, if mis/disinformation is congruent with one's worldview, belief in such content is especially likely to persist in the face of corrections (Cook, Ecker & Lewandowsky 2015). Moreover, problematic content often continues to influence attitudes even after corrections have been cognitively accepted (Thorson 2016). Nevertheless, various techniques can increase the effectiveness of correction, including an emphasis on facts (Lewandowsky et al. 2012), avoiding repetition of the mis/disinformation, issuing corrections as soon as possible, avoiding negation, reducing ideological or partisan cues, citing credible sources, using graphics (Nyhan & Reifler 2012), and providing causal alternatives (Thorson 2016). However, even such carefully crafted corrections have a greater chance of success if they are congruent with their target's worldview (Lewandowsky et al. 2012). Accordingly, misleading political content is particularly difficult to correct (Walter & Murphy 2018), with journalists finding that their fact-checks fail to impact large portions of the public (Newman 2020).

An alternative tool in the fight against mis/disinformation is legislation. In Britain, recent high-level reports have called for the independent regulation of social media companies, with legal and financial consequences for failing to protect users against harmful content (Digital, Culture, Media & Sport Committee 2019; Cairncross et al. 2019), although other expert groups have recommended against such measures (Council of Europe 2018; European Commission 2018; Nielsen et al. 2019). There is a fine line between controlling problematic information and compromising free speech, with any government regulatory effort potentially facing criticism for censorship or partiality. The lack of an agreed definition of problematic content (boyd 2017) and, perhaps more importantly, of a shared understanding of the problem across the political spectrum also constitute major obstacles to regulatory solutions.

Other efforts to tackle mis/disinformation have focused on changing journalistic practice. A key concern here is the decline of resources among news organisations, which has led many outlets, especially at the local level, to close or substantially downscale newsrooms. Even so, many journalists feel their industry should make greater efforts to challenge misleading information (Newman 2020) and attempt to re-establish their reputations, possibly with some help from civil society organisations that are developing training programmes on source verification and responsible reporting (First Draft n.d.). However, efforts to restore public trust in the media may demand more financial and editorial resources than are available (Jukes 2018), and continued fact-checking will be perpetually undermined or ignored by partisan actors, without whose self-restraint journalists will always face an uphill battle.

Approaches focused on news consumers are also relevant, with civic and digital education efforts seeking to equip social media users to resist online mis/disinformation. Research suggests that young people are often unable to tell real from misleading news content on social media (McGrew et al. 2017) and that media literacy strategies, for instance teaching users how to employ fact-checking techniques, could help address this (Wineburg & McGrew 2017). Attempts to inoculate users against mis/disinformation have also shown promise, as 'fake news' games have often improved identification of, and resistance to, misleading content (Roozenbeek & van der Linden 2019). Nevertheless, there are concerns that media literacy techniques may be ineffectual (Marwick 2018), with questions about the long-term maintenance of skills and the risk of such efforts reducing trust in news altogether (Lazer et al. 2018). Media literacy is also time and resource intensive and is unlikely to reach or affect all users who need it.

Finally, changes in the way social media platforms work may also be helpful. For instance, creating more friction in the user experience may reduce users' inclination to mindlessly accept and share mis/disinformation that fits with their worldview. Research suggests that inducing social media users to think about accuracy, or to think critically, can reduce the likelihood that they trust, like, or share problematic information online (Pennycook et al. 2019). Requiring defence of a decision can trigger accuracy motivations (Kunda 1990), and this in turn encourages successful processing of worldview-incongruent information (Redlawsk 2002). Moreover, simply introducing elements of difficulty or 'disfluency' (unease) to a task can interrupt and correct reasoning based on partisan cues (Alter et al. 2007). Indeed, interventions based on disfluency have been found to increase analytic thinking and to reduce belief in conspiracy theories (Swami et al. 2014). If such friction can be introduced into the social media environment, it might reduce users' propensity to believe and share mis/disinformation, although the question of how to achieve this in practice remains to be answered. This is a momentous task, as the success of social media platforms has largely depended on encouraging fluency, ease of use, and 'stickiness': the 'ability to attract users, to get them to stay longer, and to make them return again and again' (Hindman 2018, 2).
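As a purely illustrative sketch of what such friction might look like in practice, the short Python snippet below wraps a hypothetical 'share' action in an accuracy prompt, loosely inspired by the accuracy-nudge findings cited above. The function names and interface are invented for the example and do not correspond to any real platform's API.

```python
# Hypothetical illustration of adding friction to a sharing flow: before a post
# is shared, the user is asked to pause and rate the headline's accuracy.
# All names here are invented for the sketch; no real platform API is used.
from typing import Callable

def share_with_accuracy_prompt(headline: str, share: Callable[[str], None]) -> bool:
    """Prompt the user to rate accuracy (1-5) before completing the share."""
    answer = input(f'Before sharing, how accurate is this headline (1-5)?\n"{headline}"\n> ')
    try:
        rating = int(answer)
    except ValueError:
        return False  # treat an invalid response as a cancelled share
    # The pause itself is the intervention: sharing still proceeds,
    # but only after the user has been prompted to consider accuracy.
    if 1 <= rating <= 5:
        share(headline)
        return True
    return False

if __name__ == "__main__":
    share_with_accuracy_prompt(
        "You won't believe what this politician just said",
        lambda h: print(f"Shared: {h}"),
    )
```

Even in this toy form, the design choice is visible: the intervention does not block sharing but interrupts the fluent, one-tap flow that platforms otherwise optimise for.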

 