V. Responses to misinformation, disinformation, and populism
Legal and regulatory responses to misinformation and populism
Alison Harcourt
Introduction
According to Google Trends, the frequency of searches for 'fake news' has risen significantly since the 2016 US presidential election (Figure 41.1). Disinformation and misinformation are not new phenomena but, as many authors explain, as old as time. However, their distribution has been amplified by social media, with most sharing taking place on Facebook (Marchal et al., 2019:2). This has been furthered by the practice of astroturfing and the creation of bots (Bernal, 2018:242; Marsden and Meyer, 2019). Bastos and Mercea found a 'network of Twitterbots comprising 13,493 accounts that tweeted the United Kingdom European Union membership referendum, only to disappear from Twitter shortly after the ballot' (2017:1). Social media manipulation is on the increase globally, particularly in relation to state-engineered interference (Bradshaw and Howard, 2019). Many authors point to ownership structures and a lack of transparency and accountability in the media, coupled with the declining sustainability of journalism and low trust in the media, as factors exacerbating disinformation and 'fake news'.
Stakeholders have called on governments and the European Union to take action. Two states, Germany and France, have introduced laws to tackle disinformation. Germany introduced its Netzwerkdurchsetzungsgesetz (NetzDG) in 2017 (Deutscher Bundestag, 2017), which polices social media websites following a number of high-profile national court cases concerning fake news and the spread of racist material. It enables the reporting and take-down of online content. France passed a law on the manipulation of information in 2018, which similarly obliges social media networks to take down content upon request by a judge. During elections, candidates and political parties can also appeal to a judge to stem the spread of 'false information' under the control or influence of foreign state media. The UK has taken a more self-regulatory approach after an investigation into fake news by the UK Commons Digital, Culture, Media and Sport Committee concluded that Facebook's founder Mark Zuckerberg failed to show 'leadership or personal responsibility' over fake news (2019). The two-part inquiry focused on the business practices of Facebook, particularly in response to the Cambridge Analytica scandal. The UK's resulting 2019 Online Harms White Paper proposes a self-regulatory framework under which firms should take responsibility for user safety under a duty of care. The European Union (EU) has flanked national efforts with a 2016 Code of Conduct on Countering Illegal Hate Speech Online and a 2018 action plan, which culminated in codes of practice in 2018 and 2019 that have been voluntarily adopted by social media platforms and news associations.

Figure 41.1 The frequency of 'fake news' in Google Trends (2004–2018)
Source: Google Trends, cited in Martens et al. (2018:8)
This chapter outlines these different responses through case studies of Germany, France, and the UK. The divergence is explained by existing legal instruments: France and Germany draw on hate speech laws, strong privacy laws, and the right of reply (Richter, 2018; Heldt, 2019; Katsirea, 2019), whereas the UK takes a more self-regulatory approach, as supported by recent case law (Craufurd-Smith, 2019; Woods, 2019).
National definitions
Misinformation is defined differently in different national contexts. The French law refers to 'nouvelles fausses' (false news) in reference to Article 27 of the 1881 French Press Law and the French Electoral Code. This wording was based upon recommendations from the Conseil d'État (Council of State) for reasons of conformity with existing laws and judicial review (Craufurd-Smith, 2019:56). The German NetzDG refers to 'unlawful content' ('rechtswidrige Inhalte') as defined under provisions of the German criminal code, the Strafgesetzbuch (StGB), which include insult (§185), defamation (§186), intentional defamation (§187), public incitement to crime (§111), incitement to hatred (§130), and dissemination of depictions of violence (§131). The law also stipulates that social networks define 'hate speech' within their terms of service.
The UK 2019 Online Harms White Paper uses the word disinformation, which it defines as information 'created or disseminated with the deliberate intent to mislead; this could be to cause harm, or for personal, political or financial gain'. The white paper also refers to the Digital, Culture, Media and Sport (DCMS) Select Committee's definition of disinformation: 'the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain' (2018, 2019).
The European Commission's 'high-level group on fake news and online disinformation' also uses the term disinformation, defined as 'all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit' (European Commission, 2018:1). The term disinformation was adopted, in turn, by the United Nations, the OSCE, the OAS, and the AU (OSCE, 2017), as well as by the Council of Europe (Council of Europe, 2017). These bodies avoided the term fake news in order to dissociate the phenomenon from news itself, since 'disinformation' cannot be legally defined as 'news' as such.