Filter bubbles and digital echo chambers

Judith Möller

Are filter bubbles and echo chambers two names for the same phenomenon?

Filter bubbles and echo chambers are often named as key drivers of political polarisation and societal fragmentation (Pariser 2011; Sunstein 2001). Both concepts are based on the notion that people are excluded from information that is different from what they already believe. Very often, they refer to a partisan divide into opposing ideological camps. The core argument here is that individuals on the political left are exclusively exposed to information leaning to the ideological left, and the same is true for those who identify as politically right. Inherent in both concepts is a notion of strong effects of information and technology. Both Sunstein (2001) and Pariser (2011) argue that it is due to the biased information environment that people lose sight of different perspectives and topics, which leads to increased polarisation across partisan lines, decreases tolerance and understanding of minorities, and results in a public debate that becomes ever more fragmented (Zuiderveen-Borgesius et al. 2016). Some commentators even explain the surprise outcomes of the 2016 US presidential election and the Brexit referendum as direct results of echo chambers and filter bubbles (Groshek and Koc-Michalska 2017).

The original meaning of the term echo chamber refers to hollow enclosures used to create reverberation of sound: for example, in churches, cathedrals, and recording studios. Sunstein (2001) introduced the term to describe enclosed spaces of communication. Originally, he directly linked the concept to the presence of filtering systems online, inspired by Negroponte's (1996) idea of the 'daily me'. In the meantime, however, the concept has become a popular way to describe fragmented and isolated audiences in general. As Dubois and Blank put it, 'The idea of an "echo chamber" in politics is a metaphorical way to describe a situation where only certain ideas, information and beliefs are shared. . . . People inside this setting will only encounter things they already agree with' (2018: 729).

The term filter bubble was coined by internet activist Eli Pariser (2011) nearly a decade later. He describes a filter bubble as a 'personal ecosystem of information that's been catered by these algorithms to who they think you are'. Like the echo chamber, the filter bubble is defined as an enclosed space, but whereas the metaphor of the echo chamber focuses on the nature of what is inside this space, the metaphor of the filter bubble emphasises what constitutes its boundaries: the filtering algorithms. It is important to note that both concepts describe a state rather than a process: individuals are portrayed as already excluded from challenging information, although both arguments imply that this isolation from counter-attitudinal information develops gradually over time.

The conceptualisation of echo chambers and filter bubbles and their relationship is still subject to an ongoing academic debate (see, for example, Bruns 2019). However, a consensus on some of the key differentiating characteristics is slowly emerging. First, they can be distinguished by the argument of why people are excluded from counter-attitudinal information. For echo chambers, the agency of selection lies with humans: either a person self-selects an information diet that is a perpetual echo of their own thoughts (Stroud 2010), or the social network of an individual primarily spreads information that is in consonance with the belief system and norms of that group (Dubois and Blank 2018). In the original filter bubble argument, the agency lies with the algorithms employed to select information for online news feeds. Following this argument, these algorithms detect user preferences in an opaque and unobtrusive way and subsequently offer users more of the same content. Inherent in this argument is a strong sense of technological determinism. It is also important to note that filter bubbles can distort the perception of public opinion: the seemingly neutral architecture of online news feeds might create the impression that users see the same content as everybody else while they are really receiving a highly personalised and biased news feed (Zuiderveen-Borgesius et al. 2016).

The underlying mechanism

The reason individuals are exclusively exposed to information that does not challenge their belief system is rooted in the same social-psychological mechanisms for both concepts: selective exposure and homophily. Selective exposure theory suggests that individuals prefer to expose themselves to content that confirms their belief systems because dissonant information can cause cognitive stress they would rather avoid (Festinger 1957). This process is also often called confirmation bias and has been studied and discussed extensively in social psychology and beyond (Oswald and Grosjean 2004). Homophily suggests that we are attracted to others who are similar to us, online and offline. Shared norms, ideology, and ideas are among the most important characteristics when forming relationships (Lazarsfeld and Merton 1954). That means we are frequently talking to others who are likely to agree with us. Our like-minded friends are likely to introduce us to new information that is aligned with shared perspectives and interests. Hence, both mechanisms, selective exposure and homophily, suggest that individuals are motivated to surround themselves with information that aligns with their belief systems while avoiding dissonant information (Stroud 2010).

In fact, we saw such an alignment of political ideology and information environment at the beginning of the twentieth century. During this time, most European citizens were clearly segmented into groups such as the working class, liberals, or Catholics in their respective countries (for example, pillarisation in the Netherlands; Steininger 1977). Back then, group members associated primarily with each other and consulted only dedicated newspapers and broadcasting stations. Over the past decades, this clear segmentation of the population has become much less pronounced. Today, partisans still prefer news outlets that are aligned with their belief systems but generally no longer avoid counter-attitudinal information (Weeks, Ksiazek, and Holbert 2016). Yet to this day, this kind of political parallelism (Hallin and Mancini 2004) can be observed in many European countries. Hence, the observation that we surround ourselves with voices that agree with us is nothing new or inherently connected to the emergence of the internet. The larger question that Sunstein (2001) in particular has put forward is whether the affordances of online communication amplify those tendencies. On the theoretical level, the two concepts, echo chambers and filter bubbles, differ on this point.

For echo chambers, the constituting affordances of online communication mainly pertain to online social networks. Sunstein argued that the possibility of engaging in online communities and easily sharing information causes increased fragmentation of the public debate, as 'unsought, unanticipated and even unwanted exposure to diverse topics, people, and ideas' is not afforded in the 'gated communities' online (Sunstein 2001: 2). For the filter bubble, the technical affordances are more complex. Algorithmic filtering is a process in which content is sorted and prioritised according to certain principles in order to optimise specific key performance indicators (KPIs). The sorting principles are often a combination of different algorithms: for example, collaborative filtering or content-based filtering (Bozdag 2013). To fill a personalised news feed using collaborative filtering, the recommender engine compares user signals such as past behaviour or location with those of other users and recommends new content that similar users have engaged with. A content-based filtering algorithm identifies items in the pool that share characteristics with the content a user has already engaged with. Both principles imply that if a user has a clear preference, this preference is likely to be amplified to increase the likelihood that the user clicks on the content, provided it is true that users prefer content that is an echo of their thoughts. The majority of empirical evidence so far, however, points in a different direction.
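
To make the two sorting principles described above concrete, the following is a minimal, hypothetical sketch in Python (using only NumPy). The toy interaction matrix, topic tags, and function names are invented for illustration and do not describe any real platform's recommender system.

    import numpy as np

    # Toy data (hypothetical): rows = users, columns = news items,
    # 1 = the user engaged with the item, 0 = no engagement.
    interactions = np.array([
        [1, 1, 0, 0, 1],   # user 0
        [1, 1, 1, 0, 0],   # user 1
        [0, 0, 1, 1, 0],   # user 2
    ])

    # Toy content features (hypothetical topic tags per item).
    item_features = np.array([
        [1, 0, 0],   # item 0
        [1, 0, 0],   # item 1
        [0, 1, 0],   # item 2
        [0, 1, 1],   # item 3
        [1, 0, 1],   # item 4
    ])

    def cosine(a, b):
        """Cosine similarity between two vectors (0 if either is all zeros)."""
        norm = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / norm) if norm else 0.0

    def collaborative_scores(user_idx):
        """Collaborative filtering: score items by what similar users engaged with."""
        target = interactions[user_idx]
        scores = np.zeros(interactions.shape[1])
        for other_idx, other in enumerate(interactions):
            if other_idx == user_idx:
                continue
            # Weight the other user's engagements by how similar they are to the target user.
            scores += cosine(target, other) * other
        scores[target == 1] = -np.inf   # do not re-recommend items already seen
        return scores

    def content_scores(user_idx):
        """Content-based filtering: score items by similarity to the user's past content."""
        target = interactions[user_idx]
        profile = target @ item_features   # aggregate topic tags of items already engaged with
        scores = np.array([cosine(profile, features) for features in item_features])
        scores[target == 1] = -np.inf
        return scores

    if __name__ == "__main__":
        user = 0
        print("Collaborative pick:", int(np.argmax(collaborative_scores(user))))
        print("Content-based pick:", int(np.argmax(content_scores(user))))

In this sketch, a clear existing preference raises the score of similar content under both principles; that reinforcement step is the amplification the filter bubble argument assumes, while whether users actually prefer such an echo remains the empirical question.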

 