Global responses to misinformation and populism

Daniel Funke

Introduction: regulators take aim at misinformation

In November 2017, the European Commission announced that it was enlisting the help of experts across the continent to develop potential ways to curb the spread of false information online. The newly created High-Level Expert Group on Fake News and Online Disinformation was tasked with advising on ‘policy initiatives to counter fake news and disinformation spread online’ (EU Commission 2018). Beginning in January 2018, the experts, representing media companies, non-profit organisations, and academic institutions from around Europe, regularly travelled to Brussels to meet with EU officials, address their concerns about mis- and disinformation, and brainstorm how the commission could best combat the threat. The final result: a report of best practices and recommendations for the commission to consider when drafting any potential anti-misinformation legislation or communication.

That report, published in March 2018, advocates for an inclusive, collaborative approach to addressing misinformation around the world. Among its recommendations are additional financial support for news and fact-checking organisations, calls on technology platforms to share more data, and the creation of a network of research centres studying misinformation across the EU. The report also advises moderation in addressing the challenge of misinformation, recommending the abandonment of the term 'fake news' and — crucially — not creating regulations that penalise the creation or dissemination of mis- and disinformation (Mantzarlis 2018).

While the EU's multilateral effort is perhaps the most sweeping action that a governmental institution has taken against online misinformation in the past few years, it is far from the only one. Since the EU Commission first organised its high-level group on misinformation in early 2018, at least 50 countries around the world have taken legislative action against the spread of online falsehoods. Those actions span five continents and range from laws that criminalise the dissemination of false statements to state-run media literacy efforts (Flamini and Funke 2019). While the intention of each individual action is unique to the socio-political context in which it was crafted, nearly all of them are primarily concerned with how to slow the spread of political mis- and disinformation on the internet in an age when falsehoods could affect elections, international relations, and crisis response.

In this chapter, we will explore several state responses to misinformation around the world, how they’ve been implemented, and how they’ve affected free speech, press freedom, and digital information sharing. We will analyse how the rise of widespread misinformation during the 2016 United States election sparked a global effort to contain the spread of political falsehoods in the form of hard and soft regulations. While some of these efforts are well intentioned, such as a Nigerian initiative aimed at bolstering media literacy, others have co-opted the language of misinformation to persecute journalists. Take, for example, Egypt, where 21 journalists were imprisoned on false news charges in 2019 (Committee to Protect Journalists 2019). As some of these cases illustrate, the threat of online misinformation is serious, but the threat of government abuse of power is just as concerning, if not more so.

This chapter will conclude by evaluating a few of the ways that governments have tried to regulate online deception. Ultimately, a successful approach to regulating mis- and disinformation might do something similar to the EU’s 2018 high-level group: involve multiple stakeholders, place an emphasis on the role of the media in calling out falsehoods, teach media literacy, and — above all — adopt a modus operandi of caution and restraint when considering any regulations that tell people what’s true and false.

The rise of a global threat

While false news, conspiracy theories, rumours, hoaxes, and bad information have been around as long as humans have had language, the impact of misinformation on politics came to a head during the 2016 election in the US. And the outcome of that presidential contest inspired governments around the world, either directly or indirectly, to take a rash of actions aimed at countering the spread of political falsehoods online.

The popularisation of online falsehoods has its roots in the 1990s, when internet access became mainstream. The democratisation of information enabled more people to learn about and participate in politics and current events, but it also provided more room for the proliferation of bad information and misconceptions. The creation of Snopes in the mid-90s came at a time when urban legends and rumours were spreading like wildfire in chain emails, instant messages, and forums (Aspray and Cortada 2019). The abundance of online rumours grew quickly with the advent of social media platforms like Facebook and Twitter, which made it easier for people to share information with their friends and family. By the late 2000s, fact-checking outlets like FactCheck, The Washington Post Fact Checker, and PolitiFact were regularly debunking online hoaxes in addition to claims from politicians, media pundits, and advocacy groups (Graves 2016).

As social media platforms grew and new ones like Instagram were created, online misinformation expanded. Between the 2012 and 2016 elections, Facebook nearly doubled its number of monthly active users, reaching two billion by the second quarter of 2017 (Statista 2020). Twitter saw similar growth in monthly active users (Statista 2019). Meanwhile, trust in the mainstream media hit a record low in the fall of 2016 (Gallup 2016). Those conditions, as well as the combative rhetoric of presidential candidates Donald Trump and Hillary Clinton and growing partisan divides in the US, provided fertile ground for the proliferation of misinformation online. Falsehoods spread widely on platforms like Facebook and Twitter, partially due to the efforts of Russia’s Internet Research Agency, which fabricated news, faked protests, and created memes aimed at dividing the American electorate and giving Trump a leg up in the election (Howard et al. 2018). Those tactics represented a departure from traditional campaigns, in which political action committees, advocacy organisations, and politicians themselves created most of the political spin (Persily 2017).

They were also effective. By the end of the election, political misinformation was so abundant on social media that it sometimes surpassed the reach of news organisations. A BuzzFeed News analysis found that, in the final three months of the election, the top-performing fake news stories on Facebook generated more engagement than the top stories from news outlets like the New York Times, the Washington Post, and NBC News combined (Silverman 2016). Researchers found that fake news sites disproportionately benefited from social media, with 65 domains getting three times as much traffic from social media as 690 mainstream news outlets (Allcott and Gentzkow 2017). Trump supporters and Facebook users were more likely to read fake news stories than Clinton supporters and people who didn’t use Facebook (Guess et al. 2020).

Still, despite the fact that misinformation had a wide reach in 2016, later research indicated that relatively few Americans actually shared false news (Guess et al. 2019). And, while it’s impossible to say for sure, it’s unlikely that misinformation had a substantive role in electing Trump to the White House (Allcott and Gentzkow 2017). But the damage was done; a December 2016 poll indicated that 64 percent of Americans thought made-up news caused ‘a great deal of confusion’ about current events (Pew 2016). Mainstream news organisations published articles that claimed misinformation on Facebook helped propel Trump to the White House (The Guardian 2016). False news on social media was undoubtedly a problem in 2016; however, based on what we know now, the immediate response to the election seemed to be less about misinformation and more about who won.

Regardless, in the following year, at least five countries and the EU announced measures aimed at combatting the spread of online falsehoods. German chancellor Angela Merkel was among the first to sound the alarm about misinformation after the US election, saying in a November 2016 address to the Bundestag that ‘Today we have fake sites, bots, trolls — things that regenerate themselves, reinforcing opinions with certain algorithms, and we have to learn to deal with them’ (The Washington Post 2016). Merkel’s concern later resulted in Germany’s Netzwerkdurchsetzungsgesetz (NetzDG) legislation, which passed in June 2017. The measure forces technology platforms to remove ‘obviously illegal’ posts within 24 hours or risk fines of up to €50 million (Flamini and Funke 2019).

While NetzDG — one of the earliest actions taken after the 2016 US election to combat the spread of harmful content online — has more to do with hate speech than misinformation, it was widely covered by the press as Germany’s answer to the rise of false information (BBC 2018). And it laid the groundwork for a slate of global anti-misinformation regulations to come.

 