On algorithms and (the Achilles heel of) platforms

Besides the content and form of communicative artefacts such as fake news and memes, the concept of information disorders also refers to the broader role of the ecosystem. In particular, social media platforms have been highly useful channels for the dissemination of populist messages. These platforms make it possible to have direct contact with an audience, bypassing professional news media, and they provide virtually unlimited possibilities to personalise a message and target specific users or groups (Ernst et al. 2017). We have already noted that mainstreaming involves the demarginalisation of extreme points of view by bringing them from the extreme poles of public discourse to the centre, thus making them negotiable (see earlier). And while one can argue that a kind of ‘media populism’ (Kramer 2014) also occurs in traditional media (which mostly present the political debate in emotional terms and often frame political items as in-group versus out-group issues), gatekeeping on social media is much more open than on traditional media platforms, where the focus is still mainly on elite and mainstream sources (see, among others, Grabe et al. 1999; Shoemaker and Vos 2009). Even traditional news media covering news on social media platforms use a new form of gatekeeping induced by social media logic (see, e.g. Bruns and Highfield 2015; Tandoc and Vos 2016).

It should therefore come as no surprise that extreme-right movements and actors have started to see social media as a rewarding and easy way to send their messages to both followers (as a form of ‘activism’) and non-followers (as a form of ‘mainstreaming’). Studies in this domain have focused on extreme-right discourses and hate speech on social media platforms such as Facebook (e.g. Awan 2016; Ben-David and Matamoros-Fernandez 2016; Farkas, Schou and Neumayer 2018); Twitter (e.g. O’Callaghan et al. 2012; Nguyen 2016); YouTube (Ekman 2014; O’Callaghan et al. 2015); and, to a lesser extent, Reddit (e.g. Topinka 2018) and Instagram (Ichau et al. 2019), or a combination of different platforms (e.g. Matamoros-Fernandez 2017; Ernst et al. 2017). Most platforms’ terms of service and so-called community standards forbid hate speech, but in practice hate speech flourishes on such platforms because of the often-thin line between hate speech and free speech or humour. The platforms must perform a difficult balancing act between wanting to be an open platform (and attracting users through sensationalist content) on the one hand and being called on to delete offensive content on the other (see, e.g. Gillespie 2018).

As described by Macklin (2019a), events such as the Christchurch shooting highlight the Achilles heel of many of these platforms when they are confronted with violent extremist content. In determining whether content should be removed or not, platforms tend to rely heavily on artificial intelligence and algorithms. As a result, when questioned about their responsibility for disseminating extreme-right materials, platforms often hide behind a narrative of solutionism, or ‘we have an algorithm for that’ (Morozov 2013). However, these algorithms have been found to be problematic in their own right. Not every message posted on social media (the input) is equally likely to be shown to the general public (the output), since both editorial filtering (by content moderators) and algorithmic filtering take place between input and output (Diakopoulos 2015; Napoli 2014; Wallace 2018). Poell and Van Dijck (2014) show that this selection is anything but neutral. They argue that platforms have a strong preference for breaking news and for news in line with users’ prior search and click behaviour. As far as breaking news is concerned, they claim that items or hashtags that generate a sudden, steep peak in the volume of tweets are more likely to be selected as trending topics than items that may generate a larger total volume but show no clear peak. This focus on peaks may favour spectacular, sensational, and bizarre news over complex, nuanced, but socially relevant news. It is also in line with previous research indicating that algorithms give toxic messages extra attention (Massanari 2017). These insights help explain how sensationalist news, but also fake news and extreme partisan messages, can reach millions almost instantaneously, as epitomised by the success of #pizzagate, the hashtag accompanying messages about an alleged paedophilia network headed by Hillary Clinton, which was initially launched by a troll account that mainly tweeted pro-Nazi content (Metaxas and Finn 2019).
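To make the peak-versus-volume logic concrete, the sketch below is a deliberately simplified illustration in Python, with invented hourly counts and a naive spike score of our own devising; it is not Twitter’s (or any other platform’s) actual trending algorithm. It only shows why a detector keyed to sudden bursts will rank a small but spiking hashtag above a larger but steady one.

```python
# Toy illustration (not any platform's real trending algorithm): a simple
# spike score that divides the latest hour's mentions by the average of the
# preceding hours. A sudden burst outranks a hashtag with a larger total
# volume but no clear peak, mirroring the bias Poell and Van Dijck describe.

def spike_score(hourly_counts):
    """Ratio of the most recent hour to the average of the earlier hours."""
    *baseline, latest = hourly_counts
    baseline_avg = max(sum(baseline) / len(baseline), 1)  # avoid division by zero
    return latest / baseline_avg

# Invented example data: mentions per hour over six hours.
steady_topic = [900, 950, 1000, 980, 990, 1020]   # 5,840 mentions, no peak
bursty_topic = [40, 35, 50, 45, 60, 800]          # 1,030 mentions, sharp peak

for name, counts in [("steady_topic", steady_topic), ("bursty_topic", bursty_topic)]:
    print(name, "total =", sum(counts), "spike score =", round(spike_score(counts), 2))

# The bursty topic scores roughly 17x its baseline and would 'trend';
# the steady topic, despite over five times the volume, scores about 1x.
```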

This logic of algorithmic filtering, and its entanglement with political actors, could lead to the notorious filter bubble (see, e.g. Pariser 2011) and to ‘information cocoons’ or echo chambers (see, e.g. Sunstein 2007; Jamieson and Cappella 2008). In short, a filter bubble is the result of not being able to see deviating sources and content (due to the algorithms’ filtering), while an echo chamber describes a virtual ‘self-protective enclave’ in which extreme right-wing users, for example, only consume sources and content that repeat their own thoughts over and over again and confirm their already-internalised convictions (Jamieson and Cappella 2008). A number of studies point to the existence of filter bubbles and echo chambers (see, e.g. Barbera 2015; Colleoni et al. 2014; Walter et al. 2018; Sunstein 2004), but more and more studies indicate that the picture of such insulated virtual spaces, in which users only come into contact with like-minded people and content, needs to be nuanced (Flaxman et al. 2016; Zuiderveen Borgesius et al. 2016). Bruns (2019) even argued that we should abandon the dogma that platforms by definition lead to filter bubbles and echo chambers, a dogma probably triggered by a form of ‘moral panic’, and that it is more valuable to study what people do with news and information once they are confronted with it. Consequently, more research is required.
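As a purely illustrative aside, the toy simulation below (our own crude construction, not a model of any real recommender system) shows the feedback loop that the filter-bubble thesis assumes: when recommendations are re-weighted by past clicks, early preferences compound and exposure narrows. Whether real platforms and users actually behave this way is precisely what the studies cited above dispute.

```python
import random

# Crude illustration of click-based personalisation (not a real recommender):
# each round, items are drawn in proportion to how often their category was
# clicked before, so early preferences compound and exposure narrows.

random.seed(1)
categories = ["left", "right", "neutral"]
clicks = {c: 1 for c in categories}           # start with uniform exposure
user_preference = "right"                     # the user only clicks one category

for _ in range(50):
    weights = [clicks[c] for c in categories]
    shown = random.choices(categories, weights=weights, k=10)  # personalised feed
    for item in shown:
        if item == user_preference:
            clicks[item] += 1                 # only preferred items get clicked

share = clicks[user_preference] / sum(clicks.values())
print(f"Weight on '{user_preference}' after 50 rounds: {share:.0%}")
# The preferred category ends up dominating the weights that build the feed.
```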

 