Empirical inquiry into filter bubbles and echo chambers
According to a range of empirical studies, users of online media seek out diverse information, and this is associated with less rather than more political polarisation. Evidence from large-scale panel studies demonstrates that users' online social networks are often quite diverse, which leads to engagement with a larger variety of news sources (Beam et al. 2018; Beam, Hutchens, and Hmielowski 2018). This finding is in line with a comparative study by Fletcher and Nielsen (2018) in Italy, Australia, the UK, and the US, which finds that social media users who are incidentally exposed to news consult a wider range of sources than those who do not use social media, based on self-reports collected as part of the Reuters Digital News Survey. Recent work by Yang and colleagues (2020), based on a combination of digital tracking data and survey data collected over a five-year time frame, demonstrates that observed exposure to cross-cutting information even exceeds self-reported exposure to diverse sources.
These large-scale, multi-platform studies all show that the prevalence of filter bubble and echo chamber effects among the majority of the population is low. However, as most of these studies are based on self-reported media use measures, they cannot distinguish whether this is a consequence of social curation and self-selection (echo chamber) or of an algorithmic filter system (filter bubble). Even when they are combined with digital trace data of clicks outside social media (Yang et al. 2020), it remains unclear how diverse the news feed offered to an individual user actually was. Therefore, in the following, studies that focus on the different technical affordances of echo chambers and filter bubbles are discussed.
Echo chambers
Several studies focus specifically on the influence of network composition without an additional layer of algorithmic filtering. The ‘gated communities’ that are core to the echo chamber argument were famously illustrated by Adamic and Glance (2005) through an interlinkage analysis of political bloggers in the 2004 US presidential election. They found a clear division between the red and the blue blogosphere, although both spheres remained connected. These findings were later replicated for political hashtag communities on Twitter (Williams et al. 2015; Garimella et al. 2018). The division along partisan lines online, characteristic of the US political system, can be understood as an extension of the polarised media system offline. Combining online and offline data, Stroud (2010) showed that partisanship predicts the kind of media diet people follow offline as well as online. These findings are important to keep in mind when thinking about the potential effects of echo chambers because they illustrate two points: first, individual preferences as an expression of interests and political leaning are among the most important determinants of news selection; second, in order to assess whether individuals are locked in echo chambers and excluded from anything but the reverberance of their own thoughts, information intake needs to be studied across all sources. For example, Vaccari and colleagues (2016) find that the structure of individuals' offline discussion networks reflects that of their discussion networks online; those who prefer to discuss politics with people supporting their positions often do so both online and offline.
It should be noted that, while members of online echo chambers need not have personal relationships with one another, such relationships do matter when it comes to news selection. Anspach (2017) found in a survey experiment with a small student sample that recommendations by friends and family increase the likelihood of engaging with the recommended items, even when these are counter-attitudinal.
Filter bubbles
Studying the difference between social curation and algorithmic filtering is notoriously difficult (Kitchin 2017). First, the algorithms employed by social media companies are considered trade secrets, which means that the algorithms supposedly constructing filter bubbles are beyond the reach of academic inquiry (Bruns 2019). Automatically scraping user timelines to accurately and reliably measure exposure to diverse information online is currently also considered a violation of the platforms' terms of service (Walker, Mercea, and Bastos 2019). Moreover, social curation and algorithmic curation are inherently connected and part of a larger system that shapes news curation (Thorson et al. 2019). Hence, a research design that combines detailed data on news exposure and news consumption, both online and offline, as would be required to understand whether it was an algorithm, social curation, or self-selection that potentially limited a user's access to diverse information, is currently not possible. Having said this, there are a number of studies that can shed light on certain aspects of the process, although they are often limited to one platform or lack external validity because they are based on in vitro experiments that emulate social media news feeds.
A number of studies have investigated the effect of algorithmic filter systems on single platforms: for example, Facebook (Bakshy, Messing, and Adamic 2015), Google and Google News (Haim, Graefe, and Brosius 2018; Puschmann 2019; Nechushtai and Lewis 2019), or a news website (Moeller et al. 2018). These studies focus on specific affordances that influence the process of algorithmic filtering. For example, Bakshy and colleagues (2015) studied the effect of ideological homophily in friend networks compared to algorithmic filtering. They found that the composition of the network reduces the chance of being exposed to cross-cutting content, in particular for conservatives, to a much higher degree than algorithmic sorting does. For Google searches, Haim, Graefe, and Brosius (2018) used agent-based testing to assess whether variation in the signals used for algorithmic filtering affected the diversity of results in Germany. They found that, in contrast to the choice of search terms, algorithmic selection did not reduce the diversity of the results. In line with this finding, Puschmann (2019) reported a high overlap in results for identical politics-related search terms in data donated by 4,379 users. Nechushtai and Lewis (2019) came to a similar conclusion for Google News in the US context. Focusing on differences between specific algorithms, we found that the diversity of algorithmic selection can even exceed the diversity of human editorial choice for a news website in the Netherlands (Moeller et al. 2018). It should be noted, however, that the design of the algorithm mattered: content-based filters did reduce the diversity of topics, while collaborative filtering did not. Collectively, these studies indicate that the potential of algorithmic filter systems to exclude users from information that might challenge their extant belief systems has not been realised.
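The design distinction just noted can be illustrated with a minimal sketch: a content-based filter recommends more of what a user already reads, while a collaborative filter recommends what users with overlapping histories read. The toy catalogue, reading histories, and helper functions below are entirely hypothetical and greatly simplified; they are not drawn from any of the studies discussed.

```python
# Toy sketch (hypothetical data) contrasting two recommender designs:
# a content-based filter, which recommends unread items on the topic a
# user already reads most, and a collaborative filter, which recommends
# unread items from users with overlapping reading histories.
from collections import Counter

# Hypothetical article catalogue: article id -> topic
ARTICLES = {
    "a1": "politics", "a2": "politics", "a3": "economy", "a4": "sports",
    "a5": "culture",  "a6": "economy",  "a7": "politics",
}

# Hypothetical reading histories: user -> set of article ids read
HISTORIES = {
    "alice": {"a1", "a2"},        # reads only politics
    "bob":   {"a1", "a3", "a4"},  # mixed diet
    "carol": {"a2", "a5", "a6"},  # mixed diet
}

def content_based(user, n=2):
    """Recommend unread articles on the user's single most-read topic."""
    top_topic = Counter(ARTICLES[a] for a in HISTORIES[user]).most_common(1)[0][0]
    pool = [a for a, topic in sorted(ARTICLES.items())
            if topic == top_topic and a not in HISTORIES[user]]
    return pool[:n]

def collaborative(user, n=2):
    """Recommend unread articles read by users with overlapping histories."""
    scores = Counter()
    for other, history in HISTORIES.items():
        # Skip the user themselves and users with no shared reading.
        if other == user or not history & HISTORIES[user]:
            continue
        for article in sorted(history - HISTORIES[user]):
            scores[article] += 1
    return [article for article, _ in scores.most_common(n)]

def topic_diversity(recommendations):
    """Count distinct topics among the recommended articles."""
    return len({ARTICLES[a] for a in recommendations})

print(content_based("alice"), topic_diversity(content_based("alice")))
print(collaborative("alice"), topic_diversity(collaborative("alice")))
```

In this toy setting, the content-based recommendations for a politics-only reader stay within a single topic, while the collaborative recommendations span several. This mirrors the pattern reported above: whether a filter system narrows diversity depends on how it is designed, not on filtering as such.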
However, should this change in the future, it could lead to increased polarisation. In an experiment on the effects of biased, personalised news feeds, Dylko and colleagues (2017) presented 93 university students with personalisation technology offering varying degrees of user control. They found that presenting participants with fewer counter-attitudinal items resulted in less engagement with those items. When participants customised the news feeds themselves, selective engagement with news was still present, although less pronounced. This implies that even though algorithmic filter systems might not actively reduce diversity through automatic selection, their user interfaces influence selective exposure.
Bubbles at the fringes
While most research indicates that the specific affordances of online communication do not contribute to the formation of echo chambers and filter bubbles among the majority of the population, several studies suggest that these affordances do matter for small-scale bubbles at the fringes of the mainstream. For example, Quattrociocchi, Scala, and Sunstein (2016) studied the emergence of conspiracy-related and science-related bubbles on Facebook, focusing on 1,105 Facebook group pages in Italy and the US. They found that highly engaged users in these communities interacted primarily with those sharing their beliefs and actively sought out information in accordance with their belief systems. Smith and Graham (2019) came to a similar conclusion in a study of anti-vaccination networks on Facebook. They note that while the movement itself is global, the network among its members is dense, and sub-networks appear to be ‘small worlds’, shielded from counter-attitudinal information.