Digital hate

Not surprisingly, most of the research into hate propaganda over the past two decades has focused on the internet. The exponential growth in online tools and spaces has helped hate groups organise, mobilise, and spread their messages to new audiences (Levin 2002; Caiani & Parenti 2016). Given the salience of digital hate and incivility, researchers are more likely to exaggerate the internet’s role than to underestimate it. It is not obvious from most of the evidence presented whether the internet has multiplied the incidence and intensity of hate speech or just its styles and visibility. It is certainly not the case that the internet is indispensable for hate agents. Ethnographic studies of intolerant movements and case studies of their campaigns show that most are technologically promiscuous. They are ‘innovation opportunists’, adept at ‘finding openings in the latest technologies’ but not wedded to any particular medium (Daniels 2018, 62). They embrace the internet alongside other modes of communication, from radio and television talk shows to sermons in places of worship.

These caveats aside, it is clear that the internet has added several new dimensions to the communication of hate. First, there is the digital revolution’s impact on the wider information ecosystem: it has weakened traditional hierarchies that, at their best, used to maintain a common civic space and help keep toxic expression on the fringes. The decline of public service media in Europe is part of this trend (Schroeder 2019). Furthermore, the platformisation of online interactions and lowered barriers to entry mean that toxic content is showcased alongside more trustworthy material as ‘equal residing members of the interconnected digital culture’ (Klein 2017, 12). Through this process of ‘information laundering’, Klein says, ‘false information and counterfeit movements can be washed clean by a system of advantageous associations’ (2017, 26). A related concern is the normalisation or mainstreaming of extremist rhetoric, both directly through social media (Govil & Baishya 2018) and indirectly through subsequent mainstream media coverage (Phillips 2018).

Long before the arrival of ‘deepfake’ technologies, ready access to internet platforms, web design templates, and innocuous domain names was making it easy to disguise extreme content as mainstream. For example, the website at the address ‘martinlutherking.org’ was launched in 1999, not by supporters of the American civil rights leader but by the founder of the pioneering white supremacist site Stormfront (Daniels 2018).

Just as a hate movement can use digital technologies to bypass traditional media gatekeepers, it can also circumvent established political parties that may have moderated the impact of more extreme elements. Govil and Baishya (2018) cite this factor as a reason for the right-wing-populist shift of India’s Hindu nationalist Bharatiya Janata Party (BJP). Digital tools helped BJP leader Narendra Modi roll out a national marketing strategy, ‘India Shining’, which reduced his reliance on the powerful local party bosses and other brokers who used to control access to grassroots vote banks. Personalised text messages and the Narendra Modi app gave citizens the sense that they had a direct and immediate connection with the leader. Thus, the authors argue, ‘digital social networking has given absolutist charismatic fascists new ways of engaging the masses’ (p. 69).

Second, both the production and consumption of hate propaganda have undergone a pathological form of democratisation. The internet allows the creation and circulation of hate messages to be crowdsourced, and it encourages lone actors to take up cudgels and Kalashnikovs for the cause. One should not exaggerate the novelty of these dynamics. Nazi Germany’s propaganda did not depend only on the centralised efforts of Joseph Goebbels and Julius Streicher but was also built on decades of creative work by unknown artists, writers, and small businesses that produced cheap anti-Semitic stickers, for example (Enzenbach 2012). Nor did the internet create the (misleadingly named) phenomenon of the ‘lone-wolf’ terrorist, who is embedded in a networked, communal ideology but acts with a high degree of autonomy (Schuurman et al. 2019; Berntzen & Sandberg 2014). Nathuram Godse, who assassinated Mahatma Gandhi in 1948, probably fitted that description (Debs 2013).

What is clear is that the new, more open digital environment brings into play many new kinds of loosely coordinated actors. Modi’s Hindu nationalist propaganda machine comprises top-level ideologues generating talking points, a professionally managed information technology (IT) cell, and perhaps 100,000 online volunteers at home and abroad, not to mention millions of other acolytes who cheer their leaders and troll critics (Govil & Baishya 2018). When it occurs alongside violence in the real world, trolling can have a chilling effect on the target’s speech (Bradshaw & Howard 2019). Many participants may treat such online engagement as fun entertainment, resulting in a banalisation of hate (Udupa et al. 2020). Much of this activity resembles ‘slacktivism’, low-energy work of seemingly negligible practical impact, but its sheer volume on social media can drown out and intimidate minorities and their spokesmen. The size of the mob can be multiplied with fake online identities or ‘sockpuppets’, automated accounts, and bots (Delwiche 2019). Yet the outsourcing of propaganda work to internet brigades has not made central coordination redundant: political consultancies and professional public relations and marketing firms are intimately involved in designing hate campaigns (Ong & Cabanes 2019). Despite the aforementioned Bell Pottinger scandal in South Africa, these industries remain under-regulated and under-studied.

Third, data analytic capabilities give hate agents unprecedented power to go far upstream in the process of dividing society. While earlier studies of online hate focused on the digital production and dissemination of messages, the more insidious threat probably lies in the datafication of subject formation. The digital advertising infrastructures built into internet platforms enable the algorithmic construction of identity groups that can be manipulated by influence campaigns. This involves surveilling internet users’ behaviour patterns in fine detail, constructing detailed profiles without their knowledge, identifying their cognitive and psychological dispositions, and then micro-targeting them with messages designed to exploit their vulnerabilities (Crain & Nadler 2019; Woolley & Howard 2016). The ‘networked subject’, argue Boler and Davis (2018, 83), is thus ‘fed personalized findings which functionally determine one’s windows on the infoworld’. Hate agents can, for example, heighten a target demographic’s sense of victimhood and vulnerability to make them more open to the scapegoating of minorities by authoritarian populists. Since it is difficult to keep track of which Facebook demographics are receiving which content and why, much of this propaganda work can be done surreptitiously. Counterspeech, liberalism’s recommended antidote for bad speech, is rendered impotent, since it is not possible to counter what we do not know is being spoken, or to whom.

 