Audiences, goals, and tactics

Audiences are the different groups exposed to hate speech and counterspeech messages, each with a distinct relationship to those messages. Understanding these differences is critical to the effectiveness of counterspeech in achieving its goals. In general, counterspeech aims to reduce the likelihood that audiences will accept and spread hate speech and to increase their willingness to challenge and speak out against such speech (Brown 2016). In this section, four different audiences and their associated counterspeech goals are considered: hate groups, violent extremists, the vulnerable, and the public. Individuals can potentially transition between these groups as a result of exposure to both hate speech and counterspeech, although such shifts often involve factors beyond message exposure.

While those creating and sharing hateful messages are likely to hold a variety of motivations for their actions, a core goal of such groups is often to grow and strengthen their in-group. The first audience that can be distinguished in this analysis, therefore, is hate groups. In the United States, the Southern Poverty Law Center (n.d.) defines hate groups as organisations with ‘beliefs and practices that attack or malign an entire class of people, typically for their immutable characteristics’. Hate groups are often voluntary social groups that vary in size, strength, and organisation. While some prominent groups, such as the Ku Klux Klan, have had organised and formal structures with thousands of members (at their peak), many other hate groups are diffuse and informally organised around a common ideology. Many white supremacist hate groups in recent years, for example, can be characterised as leaderless, without formal structure, and loosely organised around a common ideology (Berger 2019; Allen 2020). In such cases, there is often no official membership, and other terms indicating affiliation, such as supporters and followers, may be more accurate. The central goal of counterspeakers and counterspeech is to reduce the size and strength of hate groups collectively and to shift the discourse and ultimately the beliefs of the individuals producing hate speech (Benesch 2014).

The internet and social media afford individuals with similar grievances the ability to connect and organise much more efficiently, leading many to link the growth in hate groups to this new media ecology. Furthermore, as digital media is adopted earlier in life and consumes a greater share of one’s time, messages in the digital media ecosystem can reach young people at an early age, when their core beliefs and identities are still forming. Within this context, hate groups use a variety of forums to reach disaffected youth and offer them kinship and a sense of belonging they may lack (Kamenetz 2018).

In recent decades, hateful ideology and state power have merged at different times and places to implement discriminatory and violent policies against perceived enemies, leading in worst-case scenarios to mass atrocities and even genocide. Hateful policy against the Tutsi and moderate Hutu in Rwanda in 1994 and against the Rohingya in Myanmar in 2017, for example, led to the extermination of 800,000 people in the former case and the ethnic cleansing of 700,000 in the latter. But even outside official power, hate groups often have members willing to operationalise their beliefs through acts of violence against target groups. The hate-driven massacres of 2019 against Muslims in Christchurch, New Zealand, and against Latinos in El Paso, Texas (United States), for example, were carried out by ‘lone wolves’ committing hate-induced acts of terror. This leads to the second audience, violent extremists: sub-groups within hate groups willing to carry out violence in the name of their cause. What radicalises a member or supporter of a hate group into a violent extremist is a matter of much concern and research. A key goal of counterspeech is to understand these triggers and use communication to offset them as much as possible, so that hate does not turn into violence.

It is important to note that hate groups are not just a random set of individuals but often claim to represent a larger community with common grievances. These commonalities can be based on immutable factors, like those of the groups targeted by hate speech, creating an ‘us versus them’ dynamic. By highlighting common grievances and identifying targets to blame, hate groups hope to attract support from the larger group they claim to represent. The third audience, therefore, is referred to as the vulnerable: those who share immutable similarities with hate groups and are potential future supporters. Counterspeech aims to prevent this vulnerable audience from joining or supporting hate groups through a number of different messaging strategies that seek to confine such groups to a fringe movement within the larger community they hope to represent.

The final audience includes everyone else who may inadvertently or intentionally come across hateful rhetoric online, referred to in this chapter as the public. For this analysis, members of the group targeted by hate speech and counterspeakers themselves are excluded from the public; the focus is instead on third-party individuals who are otherwise unaffiliated with any audience already mentioned. The public will usually be much larger than any of the other audiences. The goal of counterspeakers is to get this audience to pay attention and, ideally, to engage in constructive interventions as a civic duty that assists counterspeakers in their goals. When discussing tactics, this chapter refers to cases in which researchers or practitioners intentionally employ counterspeech, whether to understand its effects or to prevent harm from hate speech, respectively.

To reach these audiences online, Wright et al. (2017) suggest four tactics, or vectors, of counterspeech: one-to-one, in which a single counterspeaker converses with another person sharing hateful messages; one-to-many, in which an individual counterspeaker reaches out to a group using a particular hateful term or phrase, such as a hateful hashtag; many-to-one, in which many respond to a particular hateful message that may have gone viral; and finally, many-to-many, in which a conversation involving many participants breaks out, often over a timely or controversial topic. One example of many-to-many involved the hashtag #KillAllMuslims, which trended on Twitter but was then taken over by counterspeakers who reacted en masse to challenge it, with one particular countermessage shared over 10,000 times (Wright et al. 2017).

When examining counterspeech research, it is important to distinguish amongst the different means by which such activity is observed and understood and the degree of intervention involved. At one end of the spectrum are naturalistic studies involving no direct intervention: researchers observe organic conversations between hateful speakers and counterspeakers (neither of whom may identify themselves by these labels) by gathering data on real conversations from social media feeds. At the other end of the spectrum are full experimental studies in which one or both sides are recruited and observed communicating in an artificial environment. Between these poles, activists and NGOs sometimes engage in coordinated counterspeech interventions to try to reduce the perceived negative impacts of hate speech. One notable example is the activist group behind #ichbinhier and #jagarhiir (German and Swedish for #Iamhere), which operates in a number of countries and engages in a range of activities, including counterarguing against hateful posts (Porten-Chee 2020).

The tactical choices for reaching hateful speakers online depend to some degree on which audiences counterspeakers want to influence. In this regard, there are at least three options. If the goal is to reach hate group members and potential violent extremists in order to change their views and behaviour, going into the ‘hornets’ nest’ and finding the spaces where they congregate is an obvious option. This can involve particular websites such as 8kun (formerly 8chan) or hate groups on social media platforms, although many of the latter have been removed under recently updated community standards (Facebook, n.d.). The second tactic involves monitoring mainstream news websites and social media pages for hateful comments, with the goal of catching them early and limiting their influence and possible recruitment of vulnerable audiences and the broader public. The third approach involves countering event-driven hate, which can surge during planned events such as elections or unexpectedly after high-profile crimes or acts of terrorism. In such scenarios, emotions can be particularly elevated because the political stakes are high, and hateful rhetoric can spread rapidly not just amongst hate groups but also amongst vulnerable audiences who share common grievances or prejudices. Here, counterspeakers can play a calming role and try to prevent hateful rhetoric from turning into violence. This can involve countering hateful hashtags as they emerge following critical events (Wright et al. 2017), sending preventive ‘peace text’ messages promoting calm during key events, or responding to false rumours. These latter activities were employed by the NGO Sisi Ni Amani Kenya during the 2013 Kenyan elections to try to prevent a repeat of the violence that marred the 2007–2008 election (Shah and Brown 2014).
