Filter bubble

Axel Bruns, Digital Media Research Centre, Queensland University of Technology, Brisbane, Australia, a.bruns@qut.edu.au

PUBLISHED ON: 29 Nov 2019 DOI: 10.14763/2019.4.1426

Abstract

Introduced by tech entrepreneur and activist Eli Pariser in 2011, the ‘filter bubble’ is a persistent concept which suggests that search engines and social media, together with their recommendation and personalisation algorithms, are centrally culpable for the societal and ideological polarisation experienced in many countries: we no longer encounter a balanced and healthy information diet, but only see information that targets our established interests and reinforces our existing worldviews. Filter bubbles are seen as critical enablers of Brexit, Trump, Bolsonaro, and other populist political phenomena, and search and social media companies have been criticised for failing to prevent their development. Yet, there is scant empirical evidence for their existence, or for the related concept of ‘echo chambers’: indeed, search and social media users generally appear to encounter a highly centrist media diet that is, if anything, more diverse than that of non-users. However, the persistent use of these concepts in mainstream media and political debates has now created its own discursive reality that continues to impact materially on societal institutions, media and communication platforms, and ordinary users themselves. This article provides a critical review of the ‘filter bubble’ idea, and concludes that its persistence has served only to redirect scholarly attention from far more critical areas of enquiry.
Citation & publishing information
Received: April 27, 2019 Reviewed: September 20, 2019 Published: November 29, 2019
Licence: Creative Commons Attribution 3.0 Germany
Funding: This research is supported by the Australian Research Council Future Fellowship project Understanding Intermedia Information Flows in the Australian Online Public Sphere, Discovery project Journalism beyond the Crisis: Emerging Forms, Practices and Uses, and LIEF project TrISMA: Tracking Infrastructure for Social Media in Australia.
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Filter bubble, Echo chambers, Social media, Social network, Polarisation
Citation: Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426

This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction

In its contemporary meaning, the term ‘filter bubble’ was introduced and popularised by the US tech entrepreneur and activist Eli Pariser, most significantly in his 2011 book The Filter Bubble: What the Internet Is Hiding from You. Pariser opens the book with an anecdote:

in the spring of 2010, while the remains of the Deepwater Horizon oil rig were spewing crude oil into the Gulf of Mexico, I asked two friends to search for the term “BP.” They’re pretty similar – educated white left-leaning women who live in the Northeast. But the results they saw were quite different. One of my friends saw investment information about BP. The other saw news. For one, the first page of results contained links about the oil spill; for the other, there was nothing about it except for a promotional ad from BP. (Pariser, 2011a, p. 2)

Pariser goes on to speculate that such differences are due to the algorithmic personalisation of search results that Google and similar search engines promise, and that in effect each search engine user exists in a filter bubble – a “personalized universe of information” (Pariser, 2015, n.p.) – that differs from individual to individual. This idea can therefore be seen as a far more critical counterpoint to Nicholas Negroponte’s much earlier vision of the Daily Me (Negroponte, 1995), a personalised online newspaper that would cater to its reader’s specific interests rather than merely providing general-interest news and information. Such differences in the evaluation of otherwise similar concepts also demonstrate the considerably changed public perception of online services and their algorithmic shaping, from the early-Web enthusiasm of the mid-1990s to the growing technology scepticism of the 2010s.

It is important to note from the outset that, in writing for a general audience and promoting his concept through TED talks and similar venues (e.g., Pariser, 2011b), Pariser largely fails to provide a clear definition of the ‘filter bubble’ concept: it remains vague and grounded in anecdote. This has subsequently created significant problems for scholarly research that has sought to verify empirically the widespread existence of filter bubbles in real-life contexts, beyond anecdotal observation. This definitional blur at the heart of the concept did not prevent it from gaining considerable currency in scholarly as well as mainstream societal discourse, however: in his farewell speech, even outgoing US President Barack Obama warned that “for too many of us it’s become safer to retreat into our own bubbles” (Obama, 2017, n.p.) rather than engage with divergent perspectives. Politicians, journalists, activists, and other societal groups are now frequently accused of ‘living in a filter bubble’ that prevents them from seeing the concerns of others.

However, in such recent discussions the term ‘filter bubble’ is no longer primarily applied to search results, as it was in Pariser’s original conceptualisation; today, filter bubbles are more frequently envisaged as disruptions to information flows in online and especially social media. Pariser himself has made this transition from search engines to social media in his more recent writing, while continuing to point especially to the role of algorithms in creating such filter bubbles: he suggests, for instance, that “the Facebook news feed algorithm in particular will tend to amplify news that your political compadres favor” (Pariser, 2015, n.p.). This shift in the assumed locus of filter bubble mechanisms also points to the growing importance of social media as primary sources of news and other information, of course – a change that longitudinal studies such as the Digital News Report (e.g., Newman et al., 2016, p. 10) have documented clearly.

In this new, social media-focussed incarnation, the ‘filter bubble’ concept has unfortunately become more and more interwoven with the related but earlier concept of ‘echo chambers’. Indeed, a substantial number of mainstream media discussions, but also many scholarly articles, now use the two terms essentially interchangeably, in formulations like “filter bubbles (aka ‘echo chambers’)” (Orellana-Rodriguez & Keane, 2018). A clear distinction between the two terms is complicated by the fact that – like Pariser for ‘filter bubble’ – the originator of the ‘echo chamber’ concept, the legal scholar Cass Sunstein, never clearly defines the latter term either. As a result, just like ‘filter bubble’, the term ‘echo chamber’ has been applied to various online contexts, ranging from ‘the Internet’ in the abstract to specific social media spaces, since it first appeared in Sunstein’s 2001 book on the concept. This terminological confusion – about the exact definition of either term in itself, and about their interrelationship with each other – has significantly hindered our ability to test them through rigorous research.

(Inter-)disciplinary perspectives on filter bubbles

Indeed, the most fundamental problem that emerges from the profound lack of robust definitions for these terms is the fact that empirical studies exploring the existence and impact of filter bubbles (and echo chambers) in any context have generally been forced to introduce their own definitions, which reduces their comparability: a study that claims to have found clear evidence for filter bubbles might have utilised a very different definition from another study that found the precise opposite. Attempts have been made in recent years to develop more systematic and empirically verifiable definitions for either term (e.g., O’Hara & Stevens, 2015; Zuiderveen Borgesius et al., 2016; Dubois & Blank, 2018), and to re-evaluate extant studies against such definitions (e.g., Bruns, 2019), but there is a need for considerably more progress in this effort.

Today, much of the scholarly focus in the investigation of these concepts is on social media; in part, this is because in spite of Pariser’s founding anecdote there is a severe lack of convincing evidence for the existence of significant filter bubbles in search results. In 2018 alone, three major studies showed substantial overlaps in the search results seen by users of different political and other interests: Nechushtai and Lewis reported that Google News searchers in the US were directed overwhelmingly to “longstanding national newspapers and television networks” (2019, p. 302); Haim et al. found “only minor effects of personalization on content diversity” for similar news searchers in Germany (2018, p. 339); and Krafft et al. built on their findings for both Google Search and Google News to explicitly “deny the algorithmically based development and solidification of isolating filter bubbles” (2018, p. 53; my translation). Notably, these results stem from studies that were conducted in different countries, at very different scales, and utilised a variety of methods. While further research should confirm these results for a greater number of national contexts and a broader range of search engines and news portals, for search at least the filter bubble idea appears to have deflated: far from the vision (or threat) of an individually unique Daily Me, the personalisation of both general and news-specific search results still appears to be exceptionally limited.
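Methodologically, these audit studies all rest on the same comparison: present the same query to different (real or simulated) users, then quantify how strongly the result lists they are served overlap. As a minimal illustration – the measure and the data below are invented for this purpose, and are not those used in the cited studies – such overlap can be expressed as a Jaccard similarity between two users’ result sets:

```python
# Minimal sketch: quantifying the overlap between the search results that two
# users see for the same query. All data here is invented for illustration.

def jaccard_overlap(results_a: list[str], results_b: list[str]) -> float:
    """Jaccard similarity of two result lists: 1.0 means identical sets."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical top results for the same news query, as seen by two users
user_a = ["nytimes.com", "bbc.com", "reuters.com", "bp.com"]
user_b = ["nytimes.com", "bbc.com", "reuters.com", "theguardian.com"]

print(f"Result overlap: {jaccard_overlap(user_a, user_b):.2f}")  # 0.60
```

A filter bubble in search would manifest as consistently low overlap scores that correlate with users’ personal characteristics; the studies cited above found the opposite, with largely uniform results across users.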

For social media, on the other hand, the debate about filter bubbles and echo chambers continues, with various studies both supporting and denying their existence. Here, the definitional confusion is most acutely felt, and also manifests very differently across the different disciplines that are involved in testing these concepts. For the purpose of the following discussion, and in order to introduce a meaningful distinction between the ‘echo chamber’ and ‘filter bubble’ concepts, we might employ the following minimal definitions (cf. Bruns, 2019, p. 29):

  • echo chamber: emerges when a group of participants choose to preferentially connect with each other, to the exclusion of outsiders (e.g., by friending on Facebook, following on Twitter, etc.)
  • filter bubble: emerges when a group of participants choose to preferentially communicate with each other, to the exclusion of outsiders (e.g., by comments on Facebook, @mentions on Twitter, etc.)

The effects of such connective and communicative structures would be further heightened if such echo chambers and filter bubbles overlapped each other closely: that is, when users who only follow each other also choose only to communicate with each other. Such overlap is by no means guaranteed: Twitter users may @mention others whom they do not follow, for instance, and Facebook users may encounter others with whom they are not friends in the comment sections of Facebook pages and groups.
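These two definitions can be operationalised as two distinct networks over the same set of users: one recording connection choices, the other communication choices. A minimal sketch follows, using the Python networkx library and invented usernames, simply to illustrate that the two structures need not coincide:

```python
# Minimal sketch: the connection network (follows) and the communication
# network (@mentions) are separate graphs that need not overlap.
# Usernames and ties are invented for illustration.
import networkx as nx

# Echo chamber candidate structure: who follows whom
follows = nx.DiGraph([("ana", "ben"), ("ben", "ana"), ("cleo", "ben")])

# Filter bubble candidate structure: who @mentions whom;
# note that ana @mentions dan without following him
mentions = nx.DiGraph([("ana", "ben"), ("ana", "dan")])

shared = set(follows.edges()) & set(mentions.edges())
print(f"Ties present in both networks: {shared}")  # {('ana', 'ben')}
```

Empirically testing for echo chambers and filter bubbles then means examining the clustering of each network separately, as well as the degree to which the two overlap.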

Notably, especially in comparison with earlier concerns about the fragmentation of national mediaspheres as a result of the multiplication of cable channels or online information sources, or about social stratification due to diverging media literacies amongst different parts of the population, the echo chamber and filter bubble concepts clearly centre on the individual media user and their treatment by search engines and social media platforms. While such earlier concerns tackled aggregate, population-wide trends, therefore, these new phenomena are driven by individualised personalisation processes (and possible user interventions in such personalisation).

Network science

As these definitions already foreshadow, one key approach to the study of filter bubbles is the (usually computational) analysis and visualisation of the network structures between social media participants. At the risk of oversimplification, such network science approaches mainly tend to look for evidence of homophily: a preference for interconnections between participants with similar interests, views, and ideologies. This, in turn, may also lead to selective exposure, as members of such homophilous communities are expected to preferentially circulate content that matches their worldviews (e.g., Batorski & Grzywińska, 2018).
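In quantitative terms, such homophily is commonly expressed as attribute assortativity: the tendency of network ties to connect nodes that share an attribute such as political leaning. A minimal sketch, again using networkx and invented data (the node names and the ‘leaning’ attribute are purely illustrative):

```python
# Minimal sketch: measuring homophily as attribute assortativity.
# Values range from -1 (pure heterophily) to 1 (perfect homophily);
# the graph and its 'leaning' attribute are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["a", "b", "c"], leaning="left")
G.add_nodes_from(["x", "y", "z"], leaning="right")
# Mostly in-group ties, with a single cross-cutting tie
G.add_edges_from([("a", "b"), ("b", "c"), ("x", "y"), ("y", "z"), ("c", "x")])

r = nx.attribute_assortativity_coefficient(G, "leaning")
print(f"Assortativity on 'leaning': {r:.2f}")  # 0.60: markedly homophilous
```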

The definitions suggested here also enable an assessment of the degree of preferential attachment to like-minded others: put simply, they enable an evaluation of the balance between in-group and out-group connections and communication. Here, in light of the considerable disconnective and disruptive effects that their proponents ascribe to filter bubbles and echo chambers, it is clear that such an evaluation would need to show considerably more than a mild tendency towards homophily in users’ connection and communication choices. For a notable divergence to emerge in the information diets experienced by members and non-members of an echo chamber or filter bubble, those members would have to both actively seek out engagement with like-minded others (in other words, pursue selective exposure) and actively stay away from those who might introduce them to alternative views (that is, practise selective avoidance).
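One established way to express this in-group/out-group balance is Krackhardt and Stern’s E-I index: external minus internal ties, divided by total ties, ranging from −1 (entirely insular) to +1 (entirely outward-facing). On this measure, a genuine echo chamber or filter bubble would require values approaching −1, not merely values below zero. A minimal sketch with invented data:

```python
# Minimal sketch: the E-I index as a measure of in-group vs. out-group ties.
# Group labels and ties are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["a", "b", "c"], group="partisans")
G.add_nodes_from(["x", "y"], group="others")
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),  # internal ties
                  ("c", "x"), ("b", "y")])             # external ties

internal = sum(1 for u, v in G.edges()
               if G.nodes[u]["group"] == G.nodes[v]["group"])
external = G.number_of_edges() - internal
ei_index = (external - internal) / G.number_of_edges()
print(f"E-I index: {ei_index:.2f}")  # -0.20: mildly, not radically, insular
```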

On most social media platforms, basic homophilous tendencies are indeed very easy to find: in the context of the current political climate in the United States, for instance, it is unsurprising that Twitter hashtags such as #MAGA or #resist attract highly homophilous and diametrically opposed participant groups (Chong, 2018); similarly, it is to be expected that Facebook pages and groups with an explicit anti-vaccination agenda will mainly attract participants who share this agenda (Smith & Graham, 2017). From this perspective, social media platforms and their respective affordances (such as Twitter hashtags, or Facebook pages and groups) can be understood as engines of homophily – and indeed, since dial-in bulletin board systems and the early Usenet first became popular, it has been this very opportunity to connect with like-minded others that has attracted users to computer-mediated communication platforms (e.g., Baym, 2000).

Notably, there are a great many contexts where such homophily can be understood as beneficial: for instance as it allows communities of interest to connect online in spite of considerable geographical distance, as it enables groups of participants with special interests to share relevant information with each other, or as it enables the members of vulnerable minorities in society to provide mutual support to one another (cf. Helberger, 2019). Is homophily alone a sufficient criterion for communicative dysfunction, then – should we now reclassify any such communities of interest in online environments as filter bubbles? After all, while politically hyperpartisan hashtags or conspiracy theory pages may attract a fairly homogenous community of participants, these particular online spaces do not exist in isolation, but are embedded into a much more complex and varied social media platform, which in turn forms only one component of a rich and diverse media ecology (cf. Dubois & Blank, 2018). As a result, network science studies that look beyond such inherently ideological communities find significantly less homophily and polarisation in non-political contexts, and also detect considerable cross-connection between the groups that populate those ideological spaces.

For instance, a network analysis commissioned by the newspaper Süddeutsche Zeitung before the 2017 German federal election found that, for all their political differences, the followers of the major parties’ Facebook pages still shared many non-political interests, and would encounter each other on the pages relating to those interests (ranging from news through entertainment to sports) – only the Facebook followers of the extreme-right AfD party diverged significantly from this pattern by concentrating predominantly on anti-migrant topics. The report concluded that “apparently closed filter bubbles do not exist in large parts of the political spectrum in Germany. … Users from different party-political milieux often encounter the same posts” (Rietzschel, 2017, n.p.; my translation) – and while the AfD’s departure from this societal mainstream is concerning in its own right, it indicates that filter bubbles, if they exist at all, do so only at the very extremes of the ideological spectrum.

For Twitter, comprehensive maps of follower connections in the Australian (Bruns et al., 2017) and Norwegian Twitterspheres (Bruns & Enli, 2018) have similarly shown the existence of interest-driven clusters of dense interconnection around topics from politics to sports, but point to few significant disconnects across the overall network. Where clusters have deliberately detached from the wider network, this is due to their significant topical divergence (porn) or enhanced need to maintain a strictly professional network (education); otherwise, the network structures of both Twitterspheres facilitate a largely unencumbered flow of information across the entire user base and would therefore be unable to support the existence of filter bubbles.

Social science

Such studies point to the fact that the multifaceted, multi-interest nature of mainstream social media platforms actively militates against the formation of echo chambers and filter bubbles. Many users do not participate in Facebook, Twitter, and other leading platforms solely because of a narrow political agenda or interest, but use these platforms to pursue a multiplicity of divergent and sometimes contradictory interests, connecting in the process with various groups and communities that may overlap to a greater or lesser degree. Such processes of accidental or deliberate overlap are now well established under the term ‘context collapse’ (Marwick & boyd, 2011) – and as much as we might see Facebook groups and Twitter hashtags in themselves as engines of homophily, we must also regard these platforms in their entirety as engines of context collapse.

Such context collapse is especially prone to occur at the point where a user’s different communities of interest, and the networks of contacts that they represent, are most likely to intersect: in the personal public (Schmidt, 2014) that surrounds the user’s social media profile. In part because ethical and practical considerations about access to data on private and semi-private interactions at the profile level prevent large-scale network science studies, such tendencies towards context collapse have been observed predominantly through the use of social science methods from ethnographic observation through interviews to surveys – and this research often presents substantial challenges to the filter bubble and echo chamber concepts.

A representative Pew Research Center survey of US social media users before the 2016 presidential election reported, for example, that only “23% of Facebook users and 17% of Twitter users say [that] most of the people in their networks hold political beliefs similar to theirs” (Duggan & Smith, 2016, p. 9). The same survey also noted that more than one quarter of respondents had “blocked or unfriended” contacts because of unwanted political content (2016, p. 4). Such attempts to disconnect might be seen as efforts by this minority of users to build a personal filter bubble; that such efforts are necessary, however, and that users overall are “worn out by the amount of political content they encounter” (2016, p. 2), clearly show that to date any manual or algorithmic attempts to reduce the heterogeneity of personal networks on Facebook and Twitter have failed.

In passing, this undermines Pariser’s suggestion, encountered above, that the Facebook algorithm “will tend to amplify news that your political compadres favor” (2015, n.p.): if our Facebook networks are inherently heterogeneous – for political as well as for other interests – then how would the algorithm be able to detect which of these connections are our ‘compadres’, and privilege their ideologies? Such a selection might be possible for users who are on Facebook purely for politics, but it is patently obvious that the vast majority of Facebook participants merely endure rather than actively enjoy political discussions (Duggan & Smith, 2016). Indeed, it is arguably a fundamental flaw of both the ‘filter bubble’ and ‘echo chamber’ concepts that they are championed by authors like Pariser and Sunstein who are themselves deeply engaged in political debates, but who fail to recognise that their experience of online and social media therefore diverges considerably from that of social media users who are not ‘political junkies’ (Coleman, 2003).

For such ordinary, apolitical social media users the encounter with political news and debate is therefore substantially more likely to be unplanned and serendipitous, through “casual political talk” in non-political contexts (Wojcieszak & Mutz, 2009, p. 50) and the occasional sharing of news items by others in their personal networks. Because they are so unplanned, however, such serendipitous encounters are unlikely to drive filter bubble tendencies: indeed, research using the survey data gathered for the Digital News Report has shown that “those who are incidentally exposed to news on social media use more different sources of online news than non-users” (Fletcher & Nielsen, 2018, p. 2459). This means that, contrary to concerns about the fragmentation of society as a result of filter bubbles, for many users social media have positively increased the diversity of their information diet, and prevent them from becoming locked into ideological monocultures.

Notably, a long-term study of political polarisation in the United States suggests that “the groups least likely to use the Internet experienced larger changes in polarization between 1996 and 2016 than the groups most likely to use the Internet” (Boxell et al., 2017, p. 10612). This would mean that, pace Sunstein and Pariser and in contrast to current moral panics about the impact of social media on political discourse, online and social media have the potential to actively mitigate polarisation tendencies. It should be acknowledged here, however, that such large-scale, longitudinal, survey-based observations may be valid at an aggregate level, but show considerable variation for individual communities and individuals: clearly, as examples like the anti-vaccination activists or AfD supporters show, for some groups it does remain possible to use online and social media to seek out strong homophily and engage preferentially in the development of distinct and divergent ideological positions.

Media and communication studies

Disciplines within the general field of media and communication studies are a particularly rich source of explanations for such divergent tendencies at the individual and group level. Researchers here are perhaps especially likely to consider the sometimes contradictory results of individual studies in the broader context of the overall media ecology, both taking into account the multiplicity of communicative practices and spaces within specific social media platforms and especially also recognising the importance of interaction and information flows across multiple social media platforms and other media channels within the contemporary hybrid media environment (Chadwick et al., 2016).

From this disciplinary perspective, it seems entirely possible that sufficiently motivated (that is, hyperpartisan and polarised) participants on a given platform may engage in communal spaces that, from a network science approach that traces their horizontal linkages, appear as highly homophilous and detached from other communication spaces on the platform – but that these same participants will nonetheless remain embedded in the larger media environment observed by social science research, even if such vertical linkages across platforms and channels remain invisible to computational data capture. In other words, the localised homophily that is likely to exist in specific contexts and spaces does not fundamentally undermine the general heterogeneity of the hybrid media ecology, and cannot usually prevent its participants from encountering – willingly or unwillingly – a broad range of information and perspectives.

To fully detach from this diversity would require considerably more drastic steps: as O’Hara and Stevens describe it, “a networked individual would have to enter the echo chamber and somehow lose his or her diverse connections, replacing them with more and stronger connections within the echo chamber”, in a way similar to the processes by which “people are adopted into cults, brainwashed, and alienated from their contacts, but … this is not a common scenario” (2015, p. 416). In fact, many of the hyperpartisans with the strongest adherence to extremist views – from anti-vaccination and anti-climate science activists to the right-wing extremists supporting Donald Trump or the AfD – are also highly engaged with the mainstream media, at least in order to monitor what their enemies are thinking: readers of extremist white supremacy sites are significantly more likely to visit the New York Times or similar quality news outlets than ordinary news users, for example (Gentzkow & Shapiro, 2011, p. 1823). In other words, their ideology may diverge in extreme ways from the societal mainstream, but they do not exist in a filter bubble by any definition.

Social and political relevance and impact

As early as 2004, David Weinberger remarked that “the problem with an extraterrestrial-conspiracy mailing list isn’t that it’s an echo chamber; it’s that it thinks there’s a conspiracy by extraterrestrials” (2004, n.p.). Concepts like ‘echo chamber’ and ‘filter bubble’ – which, notably, were introduced not by scholars in media, communication, internet, or related fields of study, but by authors working well outside their area of expertise (Pariser is an activist and tech entrepreneur, Sunstein is a legal scholar) – have served to obscure considerably more pressing societal problems, and to misdirect scholarly, journalistic, and regulatory attention to the technological rather than social and societal factors underpinning these problems. While the empirical evidence does not support the existence of echo chambers and filter bubbles as actual, observable phenomena in public communication, therefore, the persistent use of these concepts in mainstream media and political debates around the world has created its own discursive reality that continues to impact materially on societal institutions, media and communication platforms, and ordinary users themselves. As scholars, we cannot therefore simply close the case on filter bubbles and echo chambers and move on to more realistic concerns, but are forced to continue to push back against the simplistic models of connection and communication that these concepts continue to represent.

Even before the advent of contemporary social media platforms like Facebook (launched 2004) or Twitter (launched 2006), Weinberger saw early glimpses of these developments. He referred to the ‘echo chamber’ concept when he noted that the “meme is not only ill-formed, but it also plays into the hands of those who are ready to misconstrue the Net in order to control it” (2004, n.p.), yet this applies just as much to the subsequent ‘filter bubble’ idea: much of the contemporary public debate about echo chambers and filter bubbles has straightforwardly assumed that these phenomena exist in reality and have a significant deleterious effect on society; that they are caused by the new communication technologies of search engines and now especially also of social media; that a particular root cause of the problem lies in the personalisation and recommendation algorithms deployed by these platforms; and that this technological problem must therefore also have a technological solution (Meineck, 2018, n.p.). This technologically determinist approach to echo chambers and filter bubbles persists despite the fact that – as we have seen in this article, as well as in a series of more detailed critical reviews of empirical research on these phenomena (e.g., Bruns, 2019; Dubois & Blank, 2018; Zuiderveen Borgesius et al., 2016) – there is a pronounced absence of evidence that genuine filter bubbles or echo chambers are real, outside of highly specific and unusual contexts.

This disconnect between the public understanding of and the scientific evidence on these concepts has all the hallmarks of a moral panic, similar to those that have accompanied the transition to almost any other major new communication technology in human history. Such moral panics point to the persistence of a simplistic and naïve understanding of media effects amongst the general public as well as amongst media and political actors. In this simple view, we are defenceless as media technologies, especially new and emerging ones, ‘do things to us’ (change our attention spans, make us more angry, enclose us in filter bubbles); by contrast, the predominant conclusion of media effects research over the past decades has been that new media adoption is always a negotiated process of social construction, during which these media are adapted and changed at least as much as media users adjust their own practices. That this message is still not getting through to the general public shows the seductive nature of moral panic narratives, but must also count as a failing of media and communication research.

Further, such moral panics often also serve as part of the rear-guard defence of the old elites that stand to lose the most from any change to the status quo, so it is worth asking cui bono: who benefits from the ‘filter bubble’ meme? Mainstream media have operationalised it to suggest that only their orderly and professional gatekeeping procedures, and not the collective and self-organising gatewatching processes on social media, can sufficiently inform citizens, for example; establishment politicians have used it to assert that only their well-designed party structures, and not the populist and/or grassroots models of their opponents, can provide strong and stable leadership for their countries. In either case, the claim that audiences and voters diverging from these well-trodden paths do so because they are caught in technologically created filter bubbles and can no longer be reached by reasoned, sensible argument also has the considerable added benefit that the journalists and politicians making that claim are never required to confront their own failings, or to examine other possible reasons for their own declining popularity. It is interesting to note in this context that the relative prevalence of the filter bubble idea in different countries’ political debates may also indicate the depth of broader concerns about their political and democratic processes: it is no accident that the filter bubble and echo chamber concepts originate from the US.

Such moral panics distract us from more important matters; as Sebastian Meineck has put it, it is only “when the tale of the filter bubble bursts [that] the debate about the transformation of the public sphere can get started” (2018, n.p.; my translation). This debate will need to examine whether societies around the world, from Australia to Brazil, from Germany to the United States, are becoming increasingly polarised, or whether such polarisation is simply becoming more visible; how this polarisation is being exploited by a new breed of political actors that employ radical grassroots approaches, offer highly populist solutions centred on strongman leaders, or combine both approaches; and how these transformations severely disrupt and sometimes paralyse existing political systems and undermine fundamental societal consensus. But it will also need to recognise that such transformations are not fuelled simply by surface factors such as the communication technologies and platforms preferred by these new political actors and their established opponents, respectively – rather, they are an expression of far more fundamental social, economic, and political challenges. This does not mean that search and social media platforms are free of fault, of course – indeed, at present there is an acute need to compel them (through regulatory or other means) to do more to remove extremist accounts, prevent the circulation of disinformation, and open themselves to independent scholarly scrutiny. On the specific question of filter bubbles, however, they appear largely free of blame.

One of the few benefits of the ‘filter bubble’ concept – which Meineck, with some justification, describes as “the dumbest metaphor of the Internet” (2018, n.p.; my translation) – is that it has spawned a considerable wave of research that shows the diversity of most citizens’ media uses, and indeed points to the fact that online and social media users consume a particularly diverse news diet (Fletcher & Nielsen, 2018; Anspach, 2017). If societal and ideological polarisation persists and worsens in this environment, then this cannot be caused by filter bubbles or echo chambers, or by the algorithmic shaping of users’ information feeds that has been posited so often as the cause for such phenomena; rather, such polarisation persists in spite of the absence of filter bubbles, and perhaps even because of it: the thorough and direct interconnection across society and societies that online and social media have enabled has only made it easier to observe and express the differences between different social, economic, ethnic, religious, and ideological groups.

Indeed, recent studies have shown that partisan and hyperpartisan users often employ an inherently and staunchly oppositional reading stance as they engage with mainstream media content: they consult such media not to be informed, but to incorporate this new information into their existing picture of the ideological opponent. Worse still, direct attempts by mainstream sources to confront and correct the highly biased worldviews of the partisan fringes – for instance through fact-checking initiatives – only “confirm one’s status as a critical outsider” (Krämer, 2017, p. 1302; cf. Spohr, 2017, p. 151). To put it simply, when conspiracy theorists are told, by those whom they suspect of having orchestrated a conspiracy, that their conspiracy theories are unfounded, this only confirms the existence of the conspiracy. Fact checks may still be valuable to prevent mainstream users from drifting off to the fringes – but for those already on the fringe they only serve to deepen their disconnect from rational public debate.

Conclusion

If there is a filter at all, then, it is not the algorithmic filter postulated by the ‘filter bubble’ concept, which prevents us altogether from seeing ‘different’ content that runs counter to our own worldviews – rather, the more critical filter exists (more weakly formed perhaps in the societal mainstream, more strongly developed on the extreme fringes) in our heads, and variously leads us to adopt dominant, negotiated, and oppositional stances (cf. Hall, 1980) towards the information we encounter from a multitude of sources in our daily engagement with a hybrid, multifaceted, multi-platform media environment. The critical question then becomes why and how different groups in society come to develop such highly divergent personal readings of the same information, and how the ossification of these diverse ideological perspectives into partisan group identities can be prevented or undone – in order to mitigate the very real threat of fundamental societal polarisation, and even of a complete breakdown of the societal consensus. For societies where ideological boundaries align with clear economic, ethnic, or religious divisions, and where bipolar two-party systems prevent the emergence of centrist consensus alternatives to the polarised status quo, this challenge is especially acute.

Phenomena such as homophily (as well as heterophily) can be readily observed in contemporary communicative spaces, as can the algorithmic shaping and personalisation of newsfeeds and information streams (as well as users’ efforts to control such shaping). New media and communication technologies have always undergone a process of individual adaptation and social construction; while not neutral, the technologies and their providers are neither inherently good nor evil in this, but can be employed by their users to serve socially and societally beneficial as well as disruptive ends. As scholars, one of our primary tasks is to understand what motivates these individual and collective choices. The ‘filter bubble’ and ‘echo chamber’ concepts, however, with their strong technologically determinist elements, have very little to contribute to the solution of such fundamental challenges; indeed, with evidence for their absence in observable reality continuing to mount, perhaps it is time to allow them to fade into obscurity. Yet while politicians, journalists, technologists, and other stakeholders continue to use these terms as if they describe actual real-life phenomena, and while there is a possibility that they might build on this crucial misunderstanding of the causes of current societal challenges in their development of political, regulatory, legislative, technological, educational, or social initiatives that seek to address them, it remains incumbent on scholars to confront these ill-conceived memes head-on. “The myth of the filter bubble”, above all, “is one thing: a big misunderstanding” (Meineck, 2018, n.p.; my translation). But while that misunderstanding continues to circulate in public debate, so must scholars push back against it: by pointing to the extant studies that debunk it, and by conducting further research that uncovers the actual dynamics of polarisation.


References

Anspach, N. M. (2017). The New Personal Influence: How Our Facebook Friends Influence the News We Read. Political Communication, 34(4), 590–606. doi:10.1080/10584609.2017.1316329

Batorski, D., & Grzywińska, I. (2018). Three Dimensions of the Public Sphere on Facebook. Information, Communication & Society, 21(3), 356–374. doi:10.1080/1369118X.2017.1281329

Baym, N. K. (2000). Tune In, Log On: Soaps, Fandom, and Online Community. Thousand Oaks, CA: Sage.
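
Boxell, L., Gentzkow, M., & Shapiro, J. M. (2017). Greater Internet Use Is Not Associated with Faster Growth in Political Polarization among US Demographic Groups. Proceedings of the National Academy of Sciences, 114(40), 10612–10617. doi:10.1073/pnas.1706588114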

Bruns, A. (2019). Are Filter Bubbles Real? Cambridge: Polity.

Bruns, A., & Enli, G. (2018). The Norwegian Twittersphere: Structure and Dynamics. Nordicom Review, 39(1), 129–148. doi:10.2478/nor-2018-0006

Bruns, A., Moon, B., Münch, F., & Sadkowsky, T. (2017). The Australian Twittersphere in 2016: Mapping the Follower/Followee Network. Social Media + Society, 3(4), 1–15. doi:10.1177/2056305117748162

Chadwick, A., Dennis, J., & Smith, A. P. (2016). Politics in the Age of Hybrid Media: Power, Systems, and Media Logics. In A. Bruns, G. Enli, E. Skogerbø, A. O. Larsson, & C. Christensen (Eds.), The Routledge Companion to Social Media and Politics (pp. 7–22). New York: Routledge. doi:10.4324/9781315716299-2

Chong, M. (2018). Analyzing Political Information Network of the U.S. Partisan Public on Twitter. In G. Chowdhury, J. McLeod, V. Gillet, & P. Willett (Eds.), iConference 2018: Transforming Digital Worlds (pp. 453–463). doi:10.1007/978-3-319-78105-1_50

Coleman, S. (2003). A Tale of Two Houses: The House of Commons, the Big Brother House and the People at Home. Parliamentary Affairs, 56(4), 733–758. doi:10.1093/pa/gsg113

Dubois, E., & Blank, G. (2018). The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media. Information, Communication & Society, 21(5), 729–745. doi:10.1080/1369118X.2018.1428656
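
Duggan, M., & Smith, A. (2016). The Political Environment on Social Media [Report]. Washington, DC: Pew Research Center. Retrieved from https://www.pewresearch.org/internet/2016/10/25/the-political-environment-on-social-media/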

Fletcher, R., & Nielsen, R. K. (2018). Are People Incidentally Exposed to News on Social Media? A Comparative Analysis. New Media & Society, 20(7), 2450–2468. doi:10.1177/1461444817724170

Gentzkow, M., & Shapiro, J. M. (2011). Ideological Segregation Online and Offline. The Quarterly Journal of Economics, 126(4), 1799–1839. doi:10.1093/qje/qjr044

Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the Filter Bubble? Effects of Personalization on the Diversity of Google News. Digital Journalism, 6(3), 330–343. doi:10.1080/21670811.2017.1338145

Hall, S. (1980). Encoding/Decoding. In S. Hall, D. Hobson, A. Lowe, & P. Willis (Eds.), Culture, Media, Language: Working Papers in Cultural Studies, 1972–79 (pp. 128–139). London: Unwin Hyman.

Helberger, N. (2019). On the Democratic Role of News Recommenders. Digital Journalism. doi:10.1080/21670811.2019.1623700

Krafft, T. D., Gamer, M., & Zweig, K. A. (2018). Wer sieht was? Personalisierung, Regionalisierung und die Frage nach der Filterblase in Googles Suchmaschine [Who sees what? Personalization, regionalization, and the question of the filter bubble in Google’s search engine] [Report]. Kaiserslautern: BLM; mabb; LPR Hessen; LMK; LMS; SLM; Algorithm Accountability Lab; AlgorithmWatch. Retrieved from: https://www.blm.de/files/pdf2/bericht-datenspende---wer-sieht-was-auf-google.pdf
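
Krämer, B. (2017). Populist Online Practices: The Function of the Internet in Right-Wing Populism. Information, Communication & Society, 20(9), 1293–1309. doi:10.1080/1369118X.2017.1328520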

Marwick, A. E., & boyd, danah. (2011). I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience. New Media & Society, 13(1), 114–133. doi:10.1177/1461444810365313

Meineck, S. (2018, March 9). Deshalb ist ‘Filterblase’ die blödeste Metapher des Internets [Why the ‘filter bubble’ is the internet’s dumbest metaphor]. Motherboard. Retrieved from https://motherboard.vice.com/de/article/pam5nz/deshalb-ist-filterblase-die-blodeste-metapher-des-internets

Nechushtai, E., & Lewis, S. C. (2019). What Kind of News Gatekeepers Do We Want Machines to Be? Filter Bubbles, Fragmentation, and the Normative Dimensions of Algorithmic Recommendations. Computers in Human Behavior, 90, 298–307. doi:10.1016/j.chb.2018.07.043

Negroponte, N. (1995). Being Digital. New York: Vintage.

Newman, N., Fletcher, R., Levy, D. A. L., & Nielsen, R. K. (2016). Reuters Institute Digital News Report 2016. Retrieved from Reuters Institute for the Study of Journalism, University of Oxford website: http://reutersinstitute.politics.ox.ac.uk/sites/default/files/Digital-News-Report-2016.pdf

Obama, B. (2017, January 10). President Obama’s Farewell Address: Full Video and Text. New York Times. Retrieved from https://www.nytimes.com/2017/01/10/us/politics/obama-farewell-address-speech.html

O’Hara, K., & Stevens, D. (2015). Echo Chambers and Online Radicalism: Assessing the Internet’s Complicity in Violent Extremism. Policy & Internet, 7(4), 401–422. doi:10.1002/poi3.88
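
Orellana-Rodriguez, C., & Keane, M. T. (2018). Attention to News and Its Dissemination on Twitter: A Survey. Computer Science Review, 29, 74–94. doi:10.1016/j.cosrev.2018.07.001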

Pariser, E. (2011a). The Filter Bubble: What the Internet Is Hiding from You. London: Penguin.

Pariser, E. (2011b). Beware Online “Filter Bubbles”. TED, March 2011. Retrieved from https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles

Pariser, E. (2015, May 7). Did Facebook’s Big Study Kill My Filter Bubble Thesis? Wired. Retrieved from https://www.wired.com/2015/05/did-facebooks-big-study-kill-my-filter-bubble-thesis/

Rietzschel, A. (2017, July 11). Wie es in Facebooks Echokammern aussieht – von links bis rechts [What it looks like in Facebook’s echochambers – from left to right]. Süddeutsche Zeitung. Retrieved from https://www.sueddeutsche.de/politik/mein-facebook-dein-facebook-wie-es-in-den-echokammern-von-links-bis-rechts-aussieht-1.3576513

Schmidt, J.-H. (2014). Twitter and the Rise of Personal Publics. In K. Weller, A. Bruns, J. Burgess, M. Mahrt, & C. Puschmann (Eds.), Twitter and Society (pp. 3–14). New York: Peter Lang.

Smith, N., & Graham, T. (2017). Mapping the Anti-Vaccination Movement on Facebook. Information, Communication & Society, 22(9), 1310–1327. doi:10.1080/1369118X.2017.1418406

Spohr, D. (2017). Fake News and Ideological Polarization: Filter Bubbles and Selective Exposure on Social Media. Business Information Review, 34(3), 150–160. doi:10.1177/0266382117722446

Sunstein, C. R. (2001). Echo Chambers: Bush v. Gore, Impeachment, and Beyond. Princeton: Princeton University Press.

Weinberger, D. (2004, February 21). Is There an Echo in Here? Salon. Retrieved from https://www.salon.com/2004/02/21/echo_chamber/
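
Wojcieszak, M. E., & Mutz, D. C. (2009). Online Groups and Political Discourse: Do Online Discussion Spaces Facilitate Exposure to Political Disagreement? Journal of Communication, 59(1), 40–56. doi:10.1111/j.1460-2466.2008.01403.x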

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should We Worry about Filter Bubbles? Internet Policy Review, 5(1). doi:10.14763/2016.1.401
