European Union declarations
The European Union has been primarily an economic and, later, a political organization. The application and active promotion of fundamental rights and democracy was not the Union’s priority until recently. Still, various organs of the EU have clearly stood for democracy by monitoring dangerous totalitarian tendencies among its member states: the spread of violence, racism or hatred, and more recently the spread of hateful propaganda both online and offline. In this regard, among the most relevant texts concerning online hate speech regulation is the initiative launched by the EU involving four dominant IT companies. The EU’s Fundamental Rights Agency (FRA) has also played a key role in promoting common guidelines on a wide range of human rights issues, including hate speech and hate crime.
The Code of Conduct on countering illegal hate speech online
The Code of Conduct is a self-regulatory document initiated by the EU Commissioner for Justice, Consumers and Gender Equality, Vera Jourova. It was adopted in May 2016 by four IT companies: Facebook, Microsoft, Twitter and YouTube, all of which are based in the United States but operate on European soil. The companies are committed to “remov[ing] illegal hate speech in less than 24 hours and remov[ing] or disabl[ing] access to such content.”[1] The Code uses the term ‘illegal hate speech’ without explicitly defining it. At the same time, it states that its definition is based on Framework Decision 2008/913/JHA of November 28, 2008 on combating certain forms and expressions of racism and xenophobia, already analyzed in Chapter 5. In this way, it applies the EU definition of offline hate speech and emphasizes that the same definition is to be enforced in the online environment.
The Code of Conduct is narrowly targeted to achieve one objective: to remove hate speech. Because the definition of online hate speech is simply aligned with the offline one, the Code does not recommend further actions for tackling the phenomenon. However, as already mentioned, the regulation of cyberhate presents new challenges. Bakalis argued that the harm caused by online hatred differs from harm known in offline settings, because the “principle of equality as competing value must be given greater attention in order to determine the correct limits of legal regulation.” Undoubtedly, the Code of Conduct represents an important step towards the elimination of hatred from the Internet. Cyberhate has been used not only to attack individuals from vulnerable groups but also to recruit extremists to carry out terrorist attacks in Europe and in the Middle East. In such circumstances, eliminating hateful content from virtual reality is vital to maintaining peace and public order. The Code of Conduct does not designate an authority to act in cases where companies fail to comply with its rules. Still, a national judiciary can provide a remedy, as in the ongoing case against Facebook in Germany for its failure to delete neo-Nazi posts.[2]
The implementation of the Code of Conduct has already been evaluated three times. In 2018, the NGOs and public bodies participating in the evaluation found that, on average, the IT companies removed 70% of the illegal hate speech notified to them, a better outcome than in the previous monitoring exercise. The second major result was that the IT companies reviewed the majority of notifications within 24 hours. Among the remaining challenges cited were the lack of feedback to users and the prosecution of illegal hate speech offenses, which requires the cooperation of national police and prosecutors.
With regard to the online and offline enjoyment of freedom of expression, the Council of the European Union adopted guidelines in 2014. They apply an undifferentiated approach to the enjoyment of human rights in real and virtual environments, affirming that “all human rights that exist offline must also be protected online.” This is a consensual approach across international intergovernmental organizations: existing human rights law applies to all realities. The limits of free speech must follow the distinction between serious incitement to extremism and the right to express views that “offend, shock, or disturb.”
- [1] European Commission, Code of Conduct on Countering Illegal Hate Speech Online (May 2016), 2. 2 See Chapter 5, Section 3. 3 Chara Bakalis, “Regulating Hate Crime in the Digital Age,” in The Globalization of Hate: Internationalizing Hate Crime?, eds. Jennifer Schweppe and Mark Austin Walters, 1st ed. (Oxford, United Kingdom: Oxford University Press, 2016), 263-76. 4 Imran Awan, “Cyber Threats and Cyber Terrorism: The Internet as a Tool for Extremism,” in Policing Cyber Hate, Cyber Threats and Cyber Terrorism, eds. Brian Blakemore and Imran Awan (New York: Routledge, 2016), 21-33.
- [2] Jan Fleischhauer, “Staatsanwälte ermitteln gegen Mark Zuckerberg,” Spiegel Online, November 4, 2016, http://www.spiegel.de/netzwelt/web/facebook-staatsanwaltschaft-ermittelt-gegen-mark-zuckerberg-a-1119746.html. 2 “Countering illegal hate speech online - Commission initiative shows continued improvement, further platforms join,” European Commission, January 19, 2018, https://ec.europa.eu/commission/commissioners/2014-2019/ansip/announcements/countering-illegal-hate-speech-online-commission-initiative-shows-continued-improvement-further_en. 3 Council of the European Union, Foreign Affairs Council Meeting, EU Human Rights Guidelines on Freedom of Expression Online and Offline, 2014, accessed February 23, 2015, https://eeas.europa.eu/sites/eeas/files/eu_human_rights_guidelines_on_freedom_of_expression_online_and_offline_en.pdf. 4 Ibid., para. 6. 5 Ibid., Annex, 17. 6 European Union Agency for Fundamental Rights, Fundamental Rights Report 2016 (Luxembourg: Publications Office of the European Union, 2016), 77 et seq. 7 See previous European Union Agency for Fundamental Rights Annual Reports from 2008 to 2014.