Countering hate speech
Babak Bahador
Counterspeech refers to communication that responds to hate speech in order to reduce it and counter its potentially harmful effects. This chapter begins by defining hate speech and examining some of its negative impacts. It then defines and disaggregates the concept of counterspeech, differentiating five of its dimensions: audiences, goals, tactics, messages, and effects. These are presented in two sections. The first examines audiences, goals, and tactics. Audiences refers to the groups exposed to counterspeech, including hate groups, violent extremists, the vulnerable, and the public. Goals are the aims of those engaging in counterspeech efforts, which often vary by audience. Tactics assesses the different means and mediums used to reach these audiences. The second section examines messaging and effects. Messaging refers to the content typologies used to try to influence audiences. Effects analyses how the audiences exposed to counterspeech are influenced, based on a review of recent studies in which the approach has been tested. In this section, five key findings from the counterspeech research literature are presented.
Defining hate speech
In its narrow definition, hate speech is based on the assumption that the emotion of hate can be triggered or intensified towards certain targets through exposure to particular types of information. The emotion of hate involves an enduring dislike, a loss of empathy, and a possible desire for harm against those targets (Waltman and Mattheis 2017). Hate speech, however, is usually defined more broadly to include any speech that insults, discriminates against, or incites violence against groups that share immutable commonalities such as a particular ethnicity, nationality, religion, gender, age bracket, or sexual orientation. While the term hate speech suggests the expression of thoughts in spoken words, it can occur in any form of communication, including text, images, videos, and even gestures. The term hate speech is widely used today in legal, political, and popular discourse. However, it has been criticised for its connection to the human emotion of hate and for other conceptual ambiguities (Gagliardone et al. 2015; Howard 2019). This has led to proposals for more precise terms such as dangerous (Benesch 2014; Brown 2016), fear (Buyse 2014), and ignorant speech (Lepoutre 2019).
While jurisdictions differ in the way they define and attempt to remedy the negative consequences of hate speech (Howard 2019), there is some international consensus, through institutions such as the United Nations, about what constitutes hate speech. According to Article 20 of the International Covenant on Civil and Political Rights (United Nations n.d.), '[a]ny advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law'. Other international legal instruments that address hate speech include the Convention on the Prevention and Punishment of the Crime of Genocide (1951), the International Convention on the Elimination of All Forms of Racial Discrimination or ICERD (1969), and the Convention on the Elimination of All Forms of Discrimination Against Women or CEDAW (1981).
While these international agreements are important representations of broadly recognised principles on the topic, in practice much hate speech today occurs on private platforms run by companies such as Facebook, Twitter, and Google. It can be argued that, as a result, a new type of jurisdiction has emerged in which the policies or 'community standards' of these organisations ultimately set the boundaries between free speech and hate speech (Gagliardone et al. 2015). These policies are ever-evolving and are largely modelled on the same core principles enshrined in more traditional national laws and international treaties, with sanctions ranging from flagging hateful posts to removing them and closing the associated accounts. However, many criticise reliance on profit-making corporations to set such policies for a myriad of reasons, including their slow responses, inconsistent policy application, inappropriate enforcement (e.g. banning journalists or activists), and favouritism towards the powerful (Laub 2019).
Research on media effects and persuasion demonstrates that hateful messages are likely to have different effects on different members of in-group audiences, often determined by their predispositions. Hate speech, therefore, should be understood as speech that only has the potential to increase hate. Even when hate increases, however, other moral, cultural, political, and legal inhibitions can prevent hateful views from manifesting in behavioural responses such as violence and crime. Factors that can increase the risk of speech leading to violence include the speaker's influence, audience susceptibility, the medium, and the social and historical context (Benesch 2013; Brown 2016).
While hate speech can target individuals, it is much more concerning when groups are targeted. This is because, in such scenarios, all members of the group can become 'guilty by association' and the focus of collective blame and vicarious retribution, even if only a few were responsible for the purported negative actions (Lickel et al. 2006; Bahador 2012; Bruneau 2018). Hate speech targeting groups is almost always a form of disinformation because an entire group is rarely guilty of the negative actions and characteristics attributed to it. In the vast majority of cases, such allegations are either outright false or exaggerated, or they conflate the actions of a minority associated with the group with the group as a whole.
Hate speech is often used by populist leaders and politicians to shift blame for real or perceived societal problems or threats onto domestic minorities who have historically been the victims of prejudice and hate. While one aspect of hate speech involves calls to dehumanise and demonise such groups, another involves incitement to exclude, discriminate against, and commit violence against them as a solution to these social problems and threats. This type of speech is also used between nation-states and is a well-known precursor to international conflict, preparing and socially mobilising societies for war and normalising mass violence (Dower 1986; Keen 1991; Carruthers 2011).
To offset the potential negative impacts of hate speech, governments, technology companies, non- and intergovernmental organisations, and experts have often proposed content removal (or take-downs) and other forms of punishment. While there is much criticism of the interpretation and implementation of such sanctions, as mentioned, there is a larger ethical issue at stake, as such actions violate the human right to free speech, which is widely considered fundamental to a properly functioning democracy. Free speech is protected in the Universal Declaration of Human Rights and within the constitutions of a number of countries, including through the First Amendment to the United States Constitution. Within a US context, almost all speech is protected, with limitations only in very rare cases when 'imminent lawless action' is advocated (Tucker 2015). Furthermore, there is also the unintended risk of the Streisand effect, in which attempts to hide or censor something inadvertently increase its publicity (Lepoutre 2019).