How states responded to misinformation
Since the 2016 US election, countries around the world have become increasingly concerned with how to restrict the flow of falsehoods on the internet. This concern is typically amplified during elections and other political events, when the threat of media manipulation and online disinformation is high (lives et al. 2020). Some authoritarian countries, such as China, have long had regulations that penalise citizens who share internet rumours, but the rise of widespread misinformation over the past few years has motivated a variety of different states to take similar steps. While some countries, such as Egypt and Russia, have opted to criminalise the dissemination of falsehoods online, others, such as Belgium and Nigeria, have favoured softer regulations, including initiatives aimed at bolstering media literacy (Flamini and Funke 2019).
In this section, we will analyse some of these different state actions and how they’ve affected information sharing online. The section draws heavily from the Poynter Institute’s ‘guide to anti-misinformation actions around the world’, one of the most comprehensive sources of information about global state actions against misinformation. As of its August 2019 update, the guide had catalogued 98 individual anti-misinformation actions in 50 countries. The actions are divided into 12 different categories: laws, media literacy, bills, internet shutdowns, law enforcement, failed legislation, proposals, task forces, reports, investigations, threats, and court rulings (Flamini and Funke 2019).

Figure 41.1 Countries that have taken action against online mis- and disinformation
This section will not cover each action in Poynter’s guide but instead will analyse a few examples of each type of action that exemplify how different countries have approached regulating online mis- and disinformation. By applying the concepts of hard and soft power in international relations, these actions will be divided into hard regulations, those that necessitate state enforcement, and soft regulations, which are more focused on raising awareness, coalition building, or providing aid to citizens. The section will place these different kinds of actions in context, both chronological and geopolitical, to demonstrate how responses to online deception are rarely developed in a vacuum. Taken together, these responses present a clear picture of how some states could potentially address misinformation — and how others could use the problem to abuse their power and restrict free speech and the press.
A great deal of news coverage about state responses to misinformation since 2016 has focused on legislation. And for good reason: bills and laws, as well as law enforcement and internet shutdowns, are among the most stringent actions that states can take. This subsection will discuss how some of these hard regulations have been aimed at restricting the spread of hate speech while others specifically criminalise the creation and dissemination of misinformation. Meanwhile, other countries have resorted to shutting off the internet altogether or detaining purveyors of false information, actions that press freedom groups have condemned (Flamini and Funke 2019).
One of the first pieces of legislation that followed Germany’s NetzDG law came in December 2017 in Ireland. Lawmakers proposed a bill that would have criminalised the use of automated social media accounts to create 25 or more personas that spread political messages online, a tactic that Russian operatives employed during the 2016 US election (Howard et al. 2018). One month later, Croatian lawmakers introduced a bill aimed at limiting the spread of harmful content on Facebook. The legislation was similar to NetzDG in that it was more focused on restricting the spread of hate speech than sanctioning misinformation (Flamini and Funke 2019). Similarly, Ethiopian lawmakers passed a law in February 2020 that criminalises hate speech content that’s published on social media to more than 5,000 followers. The move came months ahead of the country’s August election and in the midst of ongoing ethnic violence (The Seattle Times 2020).
The blurring of lines between hate speech and mis- and disinformation is one of the many pitfalls of state responses to online falsehoods. While the former term is classified as malinformation since it is not necessarily false or misleading but rather intended to cause harm, the latter two must, by definition, describe content that is false, misleading, manipulated, or fabricated. While both hate speech and misinformation can cause harm on the internet, they are fundamentally different concepts (Derakhshan and Wardle 2017). However, lawmakers have tended to lump the two together in legislative proposals, leading to false expectations for laws like NetzDG. In general, passing regulation that criminalises the spread of false information online is rarer in democracies than in authoritarian states, perhaps because there are typically more legislative hoops to jump through and citizens have more agency to push back against restrictions on their free speech rights (Flamini and Funke 2019).
Still, there are plenty of examples of bills that do explicitly aim to regulate the spread of mis- and disinformation. In May 2018, there were 20 draft bills in the Brazilian Congress that focused on limiting the spread of online falsehoods ahead of the October election. The penalties ranged from fines starting around $400 to up to eight years in prison, and the bills covered everything from spreading fake news stories on social media to publishing inaccurate stories in the press (Flamini and Funke 2019). The bills attracted fierce criticism from press freedom groups like Freedom House and the Electronic Frontier Foundation, which said drafting legislation against misinformation is ‘a risky path to address complex problems, especially when it involves deciding what is true or false’ (Freedom House 2019b; Electronic Frontier Foundation 2019). Other countries pressed forward, with similar anti-misinformation bills proposed in Taiwan and the Philippines later in 2018. However, these kinds of bills tend to stall in democracies, with the exception of France. In November 2018, lawmakers passed legislation that gives officials the authority to remove fake content from social media and block the websites that spread it in the final three months of an election campaign (Flamini and Funke 2019). The law exemplifies how, when democratic countries do take legislative action against misinformation, the goal is usually to preserve the integrity of elections.
Nevertheless, such laws are more common in regimes classified as ‘partly free’ or ‘not free’ by Freedom House. In April 2018, Malaysia made it illegal to share online misinformation, becoming the first Southeast Asian country to do so (Flamini and Funke 2019). The law, which was repealed in December 2019, criminalised the publication and dissemination of false news, punishable by up to six years in jail and a fine of $128,000, and made online service providers responsible for third-party content on their platforms (The Star 2019). The next month,
Kenyan president Uhuru Kenyatta signed a bill that criminalises 17 different types of cybercrimes, including misinformation. It also punishes those who share or create misinformation with fines and jail time. More recently, Singapore passed a law in May 2019 that makes it illegal to spread ‘false statements of fact’ that compromise security, ‘public tranquility’, public safety, or international relations. Singapore’s legislation imposes fines and jail time for violators and increases sanctions for those who use an ‘online account or a bot’ to spread misinformation. Like Kenya’s law, it also holds internet platforms liable for their role in spreading falsehoods (Flamini and Funke 2019).
Advocacy organisations have criticised those anti-misinformation laws for their potential to infringe on press freedom, but other laws more explicitly target the media (Committee to Protect Journalists, May 2019). In July 2018, Russian lawmakers introduced a bill, which passed in March 2019, that bans ‘unreliable socially-important information’ that could ‘endanger lives and public health, raise the threat of massive violation of public security and order or impede functioning of transport and social infrastructure, energy and communication facilities and banks’ (USA Today 2019). The law gives the government the power to block websites that prosecutors say are in violation of the rules, which includes those that publish information that ‘disrespects’ the state (The Washington Post 2019). While it’s unclear to what extent that law has been implemented, a similar measure in Egypt has served as the foundation for the arrest of dozens of journalists since it was passed in July 2018. The law deems any account or blog with more than 5,000 followers a media outlet which can be prosecuted for publishing ‘fake news’. Egypt has jailed the most journalists on false news charges since 2018, and media rights organisations say the state’s law is being used to silence coverage that’s critical of the government (Committee to Protect Journalists 2019). Other countries have fallen into a similar pattern, with countries like Cameroon, Indonesia, and Myanmar all arresting journalists on false news charges since early 2018. Kazakhstan and Turkey have also conducted investigations of news outlets that allegedly published false information (Flamini and Funke 2019).
Meanwhile, some anti-misinformation legislation focuses on punishing politicians instead of citizens, although this is comparatively rare. A bill proposed in February 2019 in Chile would have imposed penalties on politicians who participate in the ‘dissemination, promotion or financing of false news’. While technically an example of a law enforcement action, not new legislation, a Côte d’Ivoire minister was imprisoned in January 2019 on ‘false news’ charges after tweeting about how a state prosecutor had arrested another MP. The arrest was made based on a state law that punishes the creation of ‘false news’, which has been used to jail journalists in the past (Flamini and Funke 2019). That kind of tactic, using anti-false news laws to justify the imprisonment of journalists, is common in regimes that Freedom House deems to be ‘partly free’ or ‘not free’.
Legislation isn’t the only kind of hard regulation that states have used to combat misinformation. Take India, for example, where the government regularly shuts off the internet to stem the spread of falsehoods during news events. In 2018, there were 134 internet shutdowns in the country, 47 percent of which took place in the politically tumultuous state of Jammu and Kashmir (Taye 2018; Rydzak 2019). In 2019, the number of internet shutdowns decreased slightly to 121, but India still led the world in incidents by far; Venezuela had the second-highest number of shutdowns with 12 (Taye 2019). One potential reason for India’s proclivity for turning off the internet is the fact that the country has had a rash of misinformation-related killings over the past several years (Wired 2018). The trend has spread to neighbouring countries like Sri Lanka, where the government implemented internet shutdowns during the April 2019 terrorist attacks on Easter Sunday. But several media outlets cast doubt on whether it worked, reporting that misinformation still circulated in spite of the shutdown, partly due to the widespread use of virtual private networks (BuzzFeed News 2019). Those dispatches call into question the efficacy of hard regulations against online misinformation, even when they’re enforced.
While hard regulations are the most eye-catching and the most common, they are not the only actions that states have taken to counter online mis- and disinformation in recent years. Several states, particularly those in Western Europe, have instead opted to adopt media literacy initiatives, publish reports, or create task forces aimed at improving public understanding of the threat posed by online falsehoods (Flamini and Funke 2019). While many countries have their own media literacy initiatives and proposals for dealing with misinformation, this subsection will focus on efforts launched since early 2018.
One of the most popular soft regulations states have initiated in recent years is the creation of task forces that address the threat of foreign disinformation campaigns. This chapter previously discussed how Russian influence operations promoted false content on social media in an attempt to affect the outcome of the 2016 US election. Many of the anti-disinformation task forces set up since early 2018 can be viewed as a reaction to that threat. Spain, for example, entered into a pact with Russia in November 2018 that explicitly prevents the two nations from using disinformation to affect each other’s elections. The move came after Spanish ministers accused Russia of spreading disinformation about the Catalan referendum. In June 2018, Australia created a task force specifically charged with monitoring potential foreign influence operations ahead of elections. Lawmakers also announced a media literacy campaign called Stop and Consider, which encouraged voters to pay attention to the information they were sharing online. Meanwhile, some countries have set up task forces that are less focused on foreign interference, such as a state-run WhatsApp account in the Democratic Republic of the Congo that was created to field misinformation about Ebola. Similar mis- and disinformation monitoring systems have been created in Mexico, Oman, and Pakistan (Flamini and Funke 2019).
In democracies, media literacy provisions like Australia’s have become especially popular. Take Canada, for example, which, in January 2019, announced a multi-pronged anti-misinformation effort ahead of its fall elections. The government gave $7 million to projects aimed at increasing public awareness of mis- and disinformation, ranging from awareness sessions and workshops to the development of learning materials for citizens (Department of Canadian Heritage 2019). Nigeria has taken a more direct approach by planning collaborations with media organisations and government agencies to teach citizens what’s true and false on the internet — a move that has been met with praise from media literacy advocacy organisations (Flamini and Funke 2019; Media Literacy Now 2019). Media literacy initiatives are particularly popular in Western Europe, where Belgium, the Netherlands, and Sweden have published materials aimed at informing the public about misinformation. But such efforts have also taken place at local levels of government in countries like the US, where at least 24 states have tried to pass legislation aimed at funding or creating media literacy programmes. One example is Massachusetts, which passed a law in 2018 that mandates civic education with an emphasis on media literacy (Flamini and Funke 2019).
Meanwhile, other countries have taken a research-centric approach to developing anti-misinformation actions. In addition to creating an anti-foreign disinformation task force and public school curricula aimed at teaching students how to tell fact from fiction on the internet, a United Kingdom parliamentary committee published a report in July 2018 with several recommendations for government interventions. The recommendations include rejecting the term ‘fake news’, regulating online media like traditional media, and creating a working group to research the spread of misinformation. That’s similar to the approach taken by the EU, which published a report about online misinformation based on the recommendations of a group of experts from across the continent (Flamini and Funke 2019). While those kinds of actions are less binding than task forces, and certainly less binding than hard regulations like laws or internet shutdowns, they are among the most multilateral, measured responses a state can take to combat online misinformation.