Regulatory dilemmas

The concept of hate speech within political theory, moral philosophy, and law has evolved with the emergence of human rights doctrine over the past century. Traditional societies — including today’s liberal democracies until fairly recently, as well as many contemporary illiberal regimes — protect the powerful from disparagement by the weak. The laws of seditious libel and lese-majeste, for example, are intended to preserve the special veneration that rulers claim they are due. Blasphemy law serves the same purpose for dominant religions and their clerics. The modern human rights standard, however, turns the tables on the powerful, enshrining the right to freedom of expression, including in particular the right to offend society’s most dominant individuals, institutions, and beliefs. As for hate speech, the human rights approach aims to protect the people who are most vulnerable, rather than those with the means to insulate themselves from the harms caused by speech.

International jurisprudence on hate speech is anchored in the International Covenant on Civil and Political Rights (ICCPR), the United Nations’ core human rights treaty. Article 19 of the ICCPR, which establishes the right to freedom of expression, states that governments may restrict this right to protect the rights of others. Article 20 goes further, requiring states to prohibit by law the ‘advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence’. Hate speech is thus unique among the various types of disorder under the ‘misinformation’ umbrella — from voter suppression hoaxes to fake science — in that it is the only one that international human rights law requires states to prohibit.

The European Convention on Human Rights contains similar language. Racist hate speech is treated even more stringently under the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD). Article 4 of ICERD requires states to declare as an offence punishable by law not only incitement but also ‘all dissemination of ideas based on racial superiority or hatred’. Organisations and activities engaged in racist propaganda must also be prohibited, ICERD says.

The laws of most liberal democracies are generally aligned with Article 20 of the ICCPR. The US is an outlier, applying a much higher threshold for state intervention in speech: most hate speech that occurs in public discourse is constitutionally protected; courts will only permit the government to interfere if the speech directly incites imminent violence (Abrams 2012). When the US ratified the ICCPR and ICERD, it did so with the proviso that the treaties’ Article 20 and Article 4, respectively, would not trump its own free speech guarantees. The difference between European and American approaches is a major theme in the literature on hate speech law (Hare 2009; Post 2009; Rosenfeld 2012). It has long been a source of frustration among European anti-hate groups, who see neo-Nazis from their continent taking shelter in American cyberspace (Breckheimer II 2001; Posner 2014). With Europeans’ online experience now largely mediated by a handful of American platforms such as Google, Facebook, and YouTube, critics argue that the internet’s hospitability to hate is more a product of America’s idiosyncratic commercial and libertarian ethos than of universal values (van Dijck 2019).

The differences in norms among jurisdictions within the liberal West should not obscure their more fundamental similarities. They, along with international treaties, concur that laws should restrict hate speech only if it can be objectively shown to cause harm. They differ mainly in what threshold of harm to apply. But if the harm amounts to nothing more than subjective offence, the liberal consensus is that society’s response should take non-legal form: opinion shapers can engage in counterspeech, for example, while news media can behave ethically by choosing not to spread the offensive content (Garton Ash 2016). The line that liberal democracies draw between harm and offence distinguishes them sharply from illiberal regimes, most of which have laws that punish various kinds of insult and offence. The Indian Penal Code, for example, criminalises the intentional wounding of religious feelings (Section 298). Pakistan’s notorious blasphemy law (Section 295C of its Penal Code) threatens capital punishment or life imprisonment for defiling the Prophet’s name, whether in words or images, directly or indirectly.

The stark difference in attitudes towards offensive expression is a key source of international friction in transborder cultural flows. The quintessential case is the 2005–2006 controversy over a Danish newspaper’s deliberately provocative publication of cartoons of the Prophet Mohammed (Klausen 2009). The Danish government’s refusal to offer even an expression of regret provoked the Organisation of Islamic Cooperation (OIC) to intensify its diplomatic efforts at the United Nations to get ‘defamation of religions’ recognised in international law as a legitimate justification for restricting freedom of expression (Langer 2014; McLaughlin 2010). Muslim governments argued that free speech was being abused to fuel Islamophobia and discrimination against Muslim minorities. Notably, several non-OIC members, such as the Philippines, Singapore, and Thailand, voted in favour of a 2009 ‘defamation of religions’ resolution at the General Assembly. India, Jamaica, and many other states abstained, supporting the idea of revising UN standards but arguing that this should be done on behalf of all religions, not just one. Though ultimately unsuccessful, the OIC campaign exposed an as-yet-unresolved concern that the liberal approach is enabling harmful hate propaganda to go unchecked (Appiah 2012; Hafez 2014).

Free speech theory’s harm/offence distinction — especially the First Amendment doctrine’s extremely high harm threshold — has also been challenged from within the liberal democratic fraternity. Critics argue that the prevailing standard is lopsided in its treatment of democracy’s twin pillars, favouring liberty over equality at the expense of society’s weakest communities (Waldron 2012; Cohen-Almagor 2019; Demaske 2019; Wilson & Kiper 2020). Scholars working within the paradigm of critical race theory say that any assessment of harm must take into account prevailing structural inequalities. Expression that seems only mildly provocative or innocently humorous to members of a privileged community can trigger psychological distress and add to a hostile climate for people already struggling under the weight of historical prejudice and disadvantage (Delgado & Stefancic 2018; Matsuda 1989; Schauer 1995). Although these ‘microaggressions’ have long-term cumulative effects, they may not even be recognised by the victim at the time (Sue 2010). Other scholars continue to defend the liberal standard, arguing that it is too dangerous to give states more powers to police harms that are hard to verify independently (Heinze 2016; Strossen 2018). Furthermore, in societies where equal rights are not already well protected, communities do not enjoy equal access to the supposed protection that offence laws provide. These laws, such as those against blasphemy, end up being used by dominant groups to punish minorities who are deemed to have caused offence through their speech and conduct (George 2016a; Marshall & Shea 2011).

While critical race theory and related challenges have had little impact on hate speech laws, they have influenced the behaviour of Western media organisations, internet intermediaries, universities, and other cultural institutions. For example, news organisations have over the decades updated their ethical guidelines against stereotyping women and minorities and on how to report extremist newsmakers. But despair at unending injustice and the rise of white nationalism have sharpened activists’ demands for greater sensitivity in speech. In the US and other liberal democracies, activists have mobilised outrage on college campuses and social media, occasionally resulting in the ‘de-platforming’ of speakers, boycotting of celebrities, and self-censorship by media (Hughes 2010; Kessler 2018). Ironically, the mobilisation of righteous indignation by progressives has parallels with the opportunistic offence-taking that many hate groups and intolerant movements have adopted alongside, or in lieu of, traditional hate speech (George 2016a). Predictably, even as it mocks the ‘political correctness’ of ‘social justice warriors’, the right occasionally appropriates the cultural left’s anti-hate language and tactics against its opponents. The most sophisticated examples from recent years are the ‘anti-Semitic’ labels deployed by Israel’s right-wing government and its supporters against critics of the country’s racist policies and practices, including liberal-secular Jews (Davidson 2018). Philo et al. (2019) have exposed the damaging allegations of anti-Semitism against the British Labour Party and its leader, Jeremy Corbyn, as an elaborate disinformation campaign.

Hate propagandists are also adept at making censorship and regulation backfire. Social media platforms have been developing mechanisms for removing hate speech and other inappropriate content, but online operatives of repressive governments, intolerant movements, and hate groups have been gaming these systems to stifle the voices of their ideological opponents, including human rights activists, progressive cartoonists, and feminist writers. Exploiting the fact that the platforms’ reporting systems are mostly automated, malevolent actors submit complaints en masse, triggering the temporary or permanent take-down of non-harmful content or accounts. This false-positives problem is one of the key challenges facing researchers and technologists who are trying to harness artificial intelligence to detect and counter digital hate (see, for example, Carter & Kondor 2020; Di Nicola et al. 2020; Oriola & Kotzé 2020; Vidgen & Yasseri 2020). A more fundamental problem is that algorithms can only deal with individual messages containing extreme or uncivil expression, rather than large-scale online campaigns, most of whose individual messages may be unobjectionable.
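
To make the false-positives problem concrete, the sketch below is a minimal, purely hypothetical illustration; it does not describe any real platform’s API or moderation pipeline, and all names, functions, and thresholds are invented for the example. It shows how a rule that removes content once a raw complaint count crosses a fixed threshold can be tripped by a coordinated brigade flagging a benign post, because the rule weighs complaint volume while ignoring who is complaining and whether the complaints are coordinated.

```python
# Hypothetical sketch of a naive, fully automated flag-count takedown rule.
# Not any platform's actual system; names and thresholds are invented.

from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    is_actually_harmful: bool                    # ground truth, unknown to the moderation system
    reports: list = field(default_factory=list)  # IDs of users who flagged this post


def naive_takedown(post: Post, threshold: int = 50) -> bool:
    """Remove a post once enough complaints arrive, regardless of who filed them."""
    return len(post.reports) >= threshold


def brigade(post: Post, how_many: int) -> None:
    """Simulate a coordinated campaign mass-flagging a post it wants silenced."""
    post.reports.extend(f"attacker_{i}" for i in range(how_many))


if __name__ == "__main__":
    # A non-harmful post (e.g. a cartoonist's satire) hit by 200 coordinated complaints.
    benign = Post(post_id="cartoon_001", is_actually_harmful=False)
    brigade(benign, how_many=200)

    removed = naive_takedown(benign)
    # Prints "removed=True harmful=False": a false positive, because the rule
    # counts complaints without weighing reporter behaviour or coordination.
    print(f"removed={removed} harmful={benign.is_actually_harmful}")
```

A rule along these lines would need signals beyond the raw count, such as reporter reputation and patterns of coordination across accounts, which is precisely the kind of contextual, campaign-level judgement that per-message classifiers struggle to supply.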

 