Extreme right and mis/disinformation
Thomas Frissen, Leen d’Haenens, and Michaël Opgenhaffen
Key concepts and a brief note on conceptual ambiguity
Before exploring how the extreme right and information disorders have mutually shaped one another, we need a clear definition of the notions of ‘extreme right’ (or far right) and ‘information disorder’. Easier said than done. As with most social and political-scientific concepts, reducing complex and multi-faceted phenomena to a clear-cut definition is a tricky operation that depends on normative and politicised perspectives. What is ‘extreme’ (or ‘fake’, as in ‘fake news’) for one scholar may be viewed as ‘alternative’ by another and as a discursive signifier by yet another. Given such conceptual ambiguity within the current literature, anyone intending to study these concepts faces a number of epistemological, ontological, methodological, and ideological challenges. It is therefore necessary to briefly outline the perspective from which this chapter approaches the concepts of extremism and information disorders.
In this chapter the definition of extremism is approached through two premises. The first has to do with the envisioned end goal of extremists. In the Ethica Nicomachea, Aristotle developed the conception of ‘extremes’ in the context of his virtue theory (Aristotle, Ross and Brown 2009). He saw a virtue as a character trait — of a human being or of a community — and as the perfect common middle ground between two vices: that is, extremes (Frissen 2019). In contrast to ‘virtue’, the extremes are the margins where bad qualities prevail. While people or communities at those extremes can be diametrically opposed to one another, what defines them is a rejection of the virtue; ‘the extremes reject a common middle ground’ (Leman 2016, 4). This is the first premise through which this chapter approaches the definition of extremism: a rejection of the common ground. As a consequence, extremists are best defined as those who strive for the creation of a homogeneous society based on an uncompromising, uniform, dogmatic ideology that ‘tolerates no diversity’ (Schmid 2013, 10). For the purposes of this chapter, the ‘extreme right’ is a phenomenon based on various ideologies (e.g. neo-Nazi, white supremacist, xenophobic, religious, homophobic, or gender related) that always legitimise violence against specific ethnic or cultural groups.
The second premise concerns the ways in which this end goal is achieved. While political change can be achieved through a wide variety of means, extremists favour the use of force/violence over debate and persuasion (Schmid 2013; Frissen 2019). On the basis of a historical analysis, Midlarsky (2011) argues that a willingness to kill massively for a cause or collective is what characterises all extremist groups, from fascists to communists and from separatists to nationalists. More explicitly, he states that ‘[p]olitical extremisms of all sorts share a propensity towards the mass murder of actual or potential opponents of their political programs’ (Midlarsky 2011, 8). As a result, the second premise of this chapter’s definition of extremism is that extremists turn to violence in the hope of arriving at a non-pluralistic, non-democratic, non-virtuous society. In the next section, we explore how the information ecosystem has helped in shaping such violence.
The production and dissemination of misleading information, myths, and propaganda are certainly nothing new. A brief peek into the twentieth century provides us with many examples, ranging from Orson Welles’ War of the Worlds to Joseph Goebbels’ machinery of ‘public enlightenment’. In recent years, however, these phenomena have spurred heightened scientific interest. At the epicentre of this debate lies the question of how to determine whether information is true or false, and whether it is spread intentionally or unintentionally. Scholars seem to have been mostly occupied with disentangling misinformation from disinformation. As a result, misinformation has been conceptualised as ‘publishing wrong information without meaning to be wrong or having a political purpose in communicating false information’ (Benkler, Faris and Roberts 2018, 24) while disinformation is ‘defined as manipulating and misleading people intentionally to achieve political ends’ (Benkler, Faris and Roberts 2018, 24). The locus of concern in these debates is the issue of intentionality behind the production and circulation of false information (Farkas and Schou 2018). The problem is that judgements of intentionality are often based on guesswork rather than well-grounded findings, leading to conceptual ambiguity in the literature.
Therefore, this chapter follows Benkler, Faris and Roberts’s (2018) concept of information disorder(s). This concept is much more inclusive in the sense that its definition is not based only on the piece of information as such: it also encompasses the broader role of the technological drivers and architecture in the (online) information ecosystem, such as digital platforms and social media. In other words, the concept of information disorders also refers to environmental and infrastructural phenomena such as algorithmic filtering, filter bubbles, and micro-targeting (Benkler, Faris and Roberts 2018). Furthermore, the concept of information disorder(s) is meaningful because it is not just based on the (intention of the) sender. It instead considers the outcome as the leading defining principle. Indeed, the word disorder conveys the consequences — a ‘disturbed order’ in the information ecosystem — of intentionally or unintentionally misleading pieces of information. As will become clear in the next section, instead of just capitalising on the rise of misinformation/disinformation (such as ‘fake news’ or memes), it is this notion of disorder that a deeply mediatised extreme right has instrumentalised for its cause (Bennett and Livingston 2018).