Introduction
[Metal Gear Solid 2: “Everyone withdraws into their own small gated community, afraid of a larger forum. They stay inside their little ponds, leaking whatever ‘truth’ suits them into the growing cesspool of society at large. Different cardinal truths neither clash nor mesh. No one is invalidated, but nobody is right. Not even natural selection can take place here. The world is being engulfed in ‘truth.’”]
Metal Gear Solid 2 was released two months after 9/11. The attack was captured by a variety of narratives in Western culture, one of the most significant being the framing offered by then U.S. president George W. Bush at the launch of his anti-terrorism campaign: “you’re either with us, or against us.”[i]
But: what’s “us”? What’s “them”? What is the history here? In the spirit of vengeance, these questions are better left unasked. In a time of war, the line between good and evil couldn’t be clearer.
Two decades later, the enemy has moved closer to home. Today, with the same sense of righteousness, one may find one’s neighbors’ beliefs contemptible, almost incomprehensible. Who are these people? We see them wearing their stupid hats and chanting their brainwashing slogans. We enjoy watching them bumble toward their comic demise.
The above passage from Metal Gear Solid 2 seems to have predicted something. People today appear to hold very different worldviews, differing not just in value systems but in what each considers to be ‘facts.’ This has led some to call our time the “post-truth era,” in which we stay in our little ponds, assimilating whatever ‘truth’ suits us. In this environment, who has the right to say who’s “the people” and who’s the threat? Who’s “normal” and who’s deviant? Who’s “rational,” and who’s brainwashed?
This video identifies the current mechanisms sustaining these little ponds. Not all ponds are the same, however; each sits on a spectrum of intensity. On the deep end are echo chambers: belief systems that tell their members to distrust the outside. Cults and conspiracy theories, for instance, sit closer to the echo-chamber extreme. On the shallow end are epistemic bubbles, better known as filter bubbles. Filtering is a baseline reality, a part of our everyday lives: we are limited beings who select and omit information in our surroundings. But this creates problems, which position us to slide toward the deep end under certain conditions.
This video starts at the deep end, with the mechanisms specific to echo chambers, which are more visible in our culture. Then we gradually walk toward the shallow end: the routine reality of filtering, a much more elusive and universal condition, and one that should prove to be the opposite of reassuring.
Echo Chambers
Consider the following case. Edgar Welch, a central figure in the Pizzagate affair, forcibly entered the restaurant Comet Ping Pong with a rifle. He wanted to investigate whether the restaurant was holding child slaves, a claim belonging to a conspiracy theory that cast U.S. Democratic Party officials as participants in a human-trafficking ring. Well, he didn’t find any child slaves, and then gave himself up to be arrested. The conspiracy theory community, however, took the event as a fake, a fabrication of the mainstream media, and labelled Welch a paid actor deployed to discredit the theory,[ii] I’m sure to Welch’s dismay.
So, what are we seeing here? What we’re seeing are the mechanisms of an echo chamber at work. An echo chamber is any belief system built on the idea that sources and testimonies outside of that system are not to be trusted. In other words, an echo chamber holds ideas designed to discredit the external sources that assume a different view. This mechanism is called evidential preemption.[iii] For instance, for conspiracy theory communities such as 9/11 truthers, anti-vaxxers, and climate-change deniers, any testimony from the scientific community is, by their very belief system, coded or preconceived as untrustworthy. Thus, no contrary testimony produced by those external sources can damage these belief systems in any way.
It doesn’t stop there. Not only are echo chambers impervious to contrary testimonies; contrary testimonies actually fortify the echo-chambered beliefs. This closely related protective mechanism is called disagreement-reinforcement,[iv] expressed through the idea that contrary testimonies are not only false and untrustworthy, but that their sources are motivated by hidden and malicious agendas: one key agenda being to discredit the belief system of the echo chamber community itself.
In other words, echo chambers also involve a kind of paranoiac persecution-complex that pre-explains outside ideas as “out to get them,” so that each exposure to contrary testimony in fact serves to fortify the beliefs of its members. This makes sense if you’re already lodged in a chamber. For instance, if you already believe that 5G is somehow related to mind-control, it is reasonable for you to believe that those who tell you otherwise are lying to you, gaslighting you, and that they’re driven by the hidden motive of eventually realizing mass-scale mind-control. It therefore makes sense for them to undermine you, because you’re disrupting their plans. Your belief in the theory grows stronger every time you encounter contrary views, and when talking to others in the same echo chamber, the community shares the same stories of being gaslighted by everyone else: thus collectively deepening conviction in the theory.
Similarly, if you’re already lodged in the Pizzagate conspiracy theory, the report that a believer investigated Comet Ping Pong, found no signs of human trafficking, and was then arrested could only be interpreted, or coded, as a ploy to discredit your belief system and community. This “confirms the suspicion” you’ve had all along, and leads you to conclude that your enemies are more powerful and resourceful than you had previously estimated.
The combined effect of evidential preemption and disagreement-reinforcement is precisely that, at each failure to explain a phenomenon, the theory immediately reconceives and enhances the power of its enemies rather than changing the theory itself (which is what evidence-based theories do). I’ll label this mechanism externality mystification.
Why can’t we find any evidence that the moon-landing footage was fabricated? Because the government hides it too well. For the Nazis, the socioeconomic crisis of interwar Germany was blamed on the Jews, who they believed were secretly controlling Germany. For antisemitic conspiracy believers today (again, Nazis), one reason Nazi Germany lost the war was that the Jews were simply too resourceful. For the American and British liberals who tried to explain their losses in 2016, this meant the mystification of factors such as Russian cyber interference and Cambridge Analytica microtargeting.[v] Paranoiac theories about communists infiltrating the universities and the government are also a good example. So, it’s a visible pattern: echo chambers, whether in the form of a conspiracy, cult, or political movement, are defined by robust and hysterical conceptions of their enemies.
So, any echo chamber is defined by at least one of three mechanisms: evidential preemption, the coding of external contrary testimonies as false and untrustworthy; disagreement-reinforcement, the coding of contrary testimonies as motivated by hidden agendas, often to discredit the echo chamber, thus further fortifying it; and externality mystification, the recoding of external forces as more powerful than previously estimated whenever the chamber fails to explain something.
Conspiracy Thinking
Now, as a sidenote: echo chambers are consistently associated with conspiracy theories, which makes sense, since most conspiracy theories operate through said mechanisms. But conspiracy theories aren’t always echo chambers. Conspiracy theories can be sensible, even virtuous. In other words, they aren’t necessarily echo chambers, even though there are significant overlaps.
How could conspiracy theorizing be sensible? First, it is sensible when it is borne out by facts. For instance, in the 1946 Battle of Athens,[vi] a militia of World War II veterans overthrew a corrupt local government in Athens and Etowah, Tennessee, after having correctly figured out that the local officials were part of a political machine and had no intention of allowing fair elections.[vii] Likewise, the FBI was indeed involved in the organized assassination of Black Panther leaders such as Fred Hampton in the 60s, as proven by a succession of declassified FBI documents.
The second sensible aspect of conspiracy thinking is that it would be, well, foolish to assume that the world is simply a reflection of what the official story says it is, even when we lack sufficient evidence to prove otherwise.[viii] It is intellectually virtuous to regard the official account of things with some degree of suspicion. And by extension, to label conspiracy theorists as merely ‘stupid’ is in fact a problematic move, because that accusation relies on the assumption that people are absolutely free in their life trajectories, independent of their social-historical situations. I’ll get to this point later, in the section on the path-dependence of one’s intellectual life.
Another sidenote is that categories such as conspiracy theories, cults, ideologies, and pseudosciences exist in a family resemblance to each other. That is, they share some similarities, without a single feature defining them all. But they all operate through at least one echo-chamber mechanism. For instance, Creationism as a whole explicitly positions itself in opposition to secular science, which is backed by the modern state. Intellectual leaders of a creationist community label secular science as untrustworthy, which makes the community an echo chamber. The paleo diet and climate-change denial movements work through the mechanisms of the echo chamber, but they are more pseudoscience than conspiracy theory or cult. Scientology is a classic cult, and also a pseudoscience (namely ‘dianetics’) in its crusade against psychology; it can also be a conspiracy theory in defining the state as a threat, in which case it’d be correct. QAnon is a cult, a conspiracy theory, an ideology, and a pseudo political-science about how the US government works, as in the idea of the “Q clearance.” Neoliberalism is an ideology, a pseudoscience, and a cult, but not exactly a conspiracy theory.
Filter Bubbles
Now let’s proceed to the shallow end: filter bubbles. Filter bubbles are the baseline reality we all inhabit, of which echo chambers are the more dramatic and visible cases. In other words, all echo chambers have the qualities of filter bubbles, but not all filter bubbles have the qualities of echo chambers. (Also, don’t let this diagram confuse you. The size of the circles doesn’t indicate quantity. Full disclosure, I got confused.)
So, what is a filter bubble? The mainstream definition is that it’s a state of intellectual isolation, produced by the personalized information-filtration processes of social media algorithms, by which some information is excluded by omission. This process involves the tracking of an individual’s search history on a search engine, their likes and clicks, and other potential data-points such as geolocation, the aim being to produce for users an experience that reflects their own interests, preferences, and beliefs.[ix] And it leads individuals to congeal their intellectual life into more or less fixed habitats, isolated from sources that hold different ideas.
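To make this filtration process concrete, here is a minimal sketch in Python. This is my own toy model: the catalog, the tag names, and the scoring rule are all made up, and real platform ranking systems are proprietary and vastly more complex. The point it illustrates is just the shape of the mechanism: items are ranked by overlap with what the user has already clicked, so early clicks shape everything shown later.

```python
# Toy sketch of engagement-driven personalization (hypothetical;
# real platform ranking systems are proprietary and far more complex).
from collections import Counter

CATALOG = [
    {"id": 1, "tags": {"anti-abortion", "religion"}},
    {"id": 2, "tags": {"pro-choice", "health"}},
    {"id": 3, "tags": {"religion", "community"}},
    {"id": 4, "tags": {"science", "health"}},
]

def rank_feed(catalog, click_history):
    """Rank items by how often the user has clicked items sharing their tags."""
    tag_counts = Counter(tag for item in click_history for tag in item["tags"])
    return sorted(catalog,
                  key=lambda item: sum(tag_counts[t] for t in item["tags"]),
                  reverse=True)

clicks = [CATALOG[0]]  # a single early click on item 1...
print([item["id"] for item in rank_feed(CATALOG, clicks)])
# -> [1, 3, 2, 4]: items sharing item 1's tags now top the feed
```

Notice that nothing in the sketch is malicious: it is a plain relevance heuristic, and the narrowing falls out of the feedback between ranking and clicking.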
In this conventional account, a worrying aspect of the process is that it operates outside of individuals’ conscious decisions. That is, your information diet is fed to you by the algorithms without you consciously knowing their backend, and in most cases, the way the algorithm selects information for you is not transparent at all. In short, within the bubble, you won’t notice that a platform is not a window onto the world, but a mirror for each person’s individual desires.
This is an insightful concept, but I think there are two problems with this conventional understanding.
First, filter bubbles tend to be defined as an individual’s private reality, as suggested by the better-known term personalization. But this neglects that the existence of a filter bubble already implies the individual’s involvement with a network of people sharing similar bubbles. The concept is better understood as a social or group phenomenon.[x] So, after years of using the internet, you now have a habit of reading from a familiar set of sources, ones that match your overall beliefs, and on which you depend to interpret world events from the perspective that resonates with you. After years on the internet, you’ve perhaps accumulated a network of friends and acquaintances who share your beliefs, tastes, and preferences; you depend on them to post and repost opinions that interest you, and they probably see you the same way. This intellectual isolation is not a personal isolation, but an isolation of social groups.
The second problem with the conventional understanding is that it casts the filter bubble as a technologically mediated process: something that takes place only on the internet, specifically on social media platforms such as Facebook and Twitter. But that’d be misleading, because all social networks are formed through some agreement in beliefs and interests; and that’s an ordinary, baseline social reality happening all the time outside of and without the internet. This ordinary filtration process takes place across a variety of social contexts: friend groups at school, the workplace, academic circles, hobby groups, religious groups, people who share your political alignment, and so on.
From here on, I’ll use C. Thi Nguyen’s reworked concept of the epistemic bubble instead of ‘filter bubble,’ which refers to the broader “ordinary processes of social selection and community formation” through which “some information is excluded by omission.”[xi] If we’re honest with ourselves, most of us live in at least one epistemic bubble in some capacity. So, it’s more accurate to say that, rather than creating epistemic bubbles, the internet facilitates them and makes bubble formation more visible (to both humans and machines).
We may ask two questions here. Given that epistemic bubbles are universal, what are their problems? And secondly, why do we find ourselves in these bubbles all the time? We could approach these questions from many angles. Here I’ll present three: the individual, the social, and the historical. That is, I’ll proceed from the immediate to the speculative.
The Individual Angle: Simplification
First, I’d argue that individual human beings desire a coherent worldview that explains everything. This desire is, however, frustrated by our limited time and cognitive resources, and by the complexities and contradictions of the world itself. So, we’re motivated to compromise by resorting to simplifications. These aren’t necessarily conscious decisions on the part of the individual, but my theory is that this is the original drive that sets the bubbles in motion. And it’s this drive toward coherence that compels some people to adopt conspiracy theories, dime-store mysticisms, or any simplified worldview that involves the forces of good and evil, us and them, and the inscription of clear intent onto these actors. Like conspiracy theories, for instance, any narrative of nationalism has these qualities, especially in times of war or cold war, when the enemy (a foreign state, a terrorist group, a race) is portrayed as a force of evil, hell-bent on incredible projects like world domination or “destroying our way of life.”
Theoretically, epistemic bubbles could be benign: they form because we know we’re unable to process all the complexities of the world, and so we become reliant on various sorts of mediation to help us understand things in a simplified way. This mediation may come from the people around us, from opinion influencers, or (literally) from the media. Even in theory, however, epistemic bubbles lack what’s been called coverage-reliability,[xii] which is when your filtered network leaves out relevant facts about the world and thus fails to bring significant ideas to your attention. This failing is not the fault of the individual, for no individual could know everything; it’s a problem of epistemic networks in general.
But any actually existing epistemic bubble is messy and filled with problems. The basic structure of a bubble, again, relies on one’s dependence on simplification. This dependence at the individual level involves at least two problems: the fallibility of our cognitive fluency and the fallacy of corroborative bootstrapping.
First, cognitive fluency. This is the theory that we’re more likely to accept an idea if it’s easily comprehended, and, more interestingly, that familiarity with an idea contributes to that ease of comprehension.[xiii] In other words, it’s easier to believe an idea after being repeatedly exposed to it. One study shows it like this: a group of people were shown an idea and asked to rate its reliability; when they were shown the same idea a month later, they perceived it to be more reliable.[xiv] Well, this repetition-to-belief pipeline can obviously be manipulated. After all, it’s one of the foundations of propaganda projects. I shall label this technique the gaming of cognitive fluency.
This works along with the second factor: the fallacy of corroborative bootstrapping. This is when you select for yourself multiple sources that present the same opinion, which gives you the impression that these voices were created independently of each other, and thus leads you to overweight the truth of what’s effectively the same opinion.[xv] For example, if you read the same coverage of an abortion-related controversy from Breitbart, then Fox News, and then the Epoch Times, you’d get the impression that this one opinion is being corroborated by multiple sources. The same goes for reading about the Cambridge Analytica story from, say, the New York Times, the Guardian, and then Vice News. Here, the users have effectively tricked themselves into believing that the news from these sources was produced independently, corroborating the same ‘truth.’ But what really happened was that the users preselected their sources to match their own alignments.
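To see why this overweighting matters, here is a toy Bayesian calculation (my own illustration, with made-up numbers, not from the cited paper): k genuinely independent reports multiply your odds k times over, while k reports that all trace back to one shared source only warrant a single update.

```python
# Toy Bayesian illustration of corroborative bootstrapping (made-up numbers).
# Assume each report, taken alone, is 3x more likely if the claim is true.
LIKELIHOOD_RATIO = 3.0
PRIOR_ODDS = 1.0  # 50/50 before reading anything

def posterior_odds(n_independent_reports):
    # Genuinely independent evidence multiplies the odds once per report.
    return PRIOR_ODDS * LIKELIHOOD_RATIO ** n_independent_reports

def to_probability(odds):
    return odds / (1 + odds)

# Three preselected outlets, naively treated as independent voices:
print(to_probability(posterior_odds(3)))  # ~0.96
# If all three trace back to one shared source, only one update is warranted:
print(to_probability(posterior_odds(1)))  # 0.75
```

The gap between roughly 96% and 75% confidence is produced entirely by miscounting one voice as three.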
Committing this fallacy, however, is not entirely the fault of the individual, for the fallacy dovetails with pack journalism on the news-production side: the practice whereby reporting from different news outlets presents the same angle on an issue using the same sources. This can easily mislead the audience into believing that these are independent voices, nudging them to commit the bootstrapping fallacy. Pack journalism may sometimes be an organized practice, other times disorganized or distributed but united by an ideological alignment.[xvi]
In our time of opinion-influencers, a very decentralized opinion-distribution paradigm, ideology is the only force that aligns them, which makes their pack-journalism tendency fairly visible. This is supplemented by the economic motive to cooperate, such as giving mutual shout-outs or appearing on each other’s podcasts. On the consumer end, users enter a bubble composed of a network of similarly aligned opinion-influencers, corroborating the same set of truths and values. People today routinely engage with this second layer of mediation, and here the practice of bootstrapping is more visible, because the ideological alignments of opinion-influencer networks are themselves deliberately visible as a branding practice.
The Social Angle: Relating
To begin the social angle, we can continue and expand the point on opinion influencers, one specific to our new media environment. This involves one’s routine consumption of these outlets, such that one develops what’s now called a parasocial relationship with these influencers. Parasocial interactions aren’t new. Many past cultures, and some still, involve people idolizing household deities or being fans of some interesting person. With the advent of television, this type of social relation gained intensity through the proliferation of mass visual culture, in which repeated exposure to the images and videos of media personalities (back then mostly actors and actresses) primes the individual to develop private fantasies of admiration, intimacy, friendship, love-hate relationships, and/or self-modelling around these personalities.[xvii]
Specific to the field of social and political commentary, this means looking to someone to articulate simplified opinions about the complexities of the world that resonate with one’s own preexisting worldview. Rush Limbaugh is considered the pioneer of the opinion-influencer phenomenon.[xviii] Starting in 1988, Limbaugh’s three-hour-a-day radio program was nationally distributed in the United States. Beyond the obvious points about his strong conservative positions, what’s more interesting and innovative about this new format is that it humanizes the influencer. Limbaugh built his brand by identifying himself as a regular man, speaking for the regular person, on the side of the “real people,” speaking truth to power, and firmly on the side of the good, of reason, and of neutrality.
Today, this set of rhetorical tactics is ubiquitous among internet opinion-influencers. Every one of them has a face, a voice, a style, a personal mission, quirks, evocative personal backstories of struggle and trauma, amusing anecdotes, and each proclaims to be on the side of reason and neutrality. Some of these figures go through publicized life-struggles, deepening their audiences’ identification with them. This format of parasocial relation development is an intensification of earlier print-media personalities such as Hunter S. Thompson, and definitely a departure from faceless news-article writers and the contained demeanors of news anchors. Humanization, or more precisely self-humanization, creates the effect of parasociality, which interpellates both the influencer and their audience as associates of the same cause. This is arguably a more effective form of opinion distribution and consumption. Their worldview is their brand is their product is your adoration is your routine is your worldview.
Now, as mentioned, human social interactions are the grounds on which epistemic bubbles form. Social dynamics are complex, to say the least, and the internet layer multiplies this complexity.[xix] Here, I’ll suggest some ways to look at it, by no means exhaustive. First, as mentioned, we form social networks outside of the internet with people of similar interests, values, politics, occupations, hobbies, and many other factors. In this offline world, we are also located in various social systems all at once—the family, the school, the workplace, the law, and so on—whose mores and demands leave limited room for us to display our interests and views, let alone act outside of our routine identities.
The online world, on the other hand, is a spatially distributed network in which we can encounter people similar to us with greater ease. It partly diminishes the pressures of the offline world by affording us a mediated space to associate with like-minded people, to cohere into networked communities of shared interests and values: affinity networks, or for our purposes, epistemic bubbles. This process of forming epistemic bubbles is both conscious and unconscious, and the possible paths by which each individual enters a bubble are potentially infinite. It can be experienced as a learning and self-exploration process, but almost always, by one path or another, individuals eventually find themselves in online networks identifiable by some shared interests and values. The more intense forms of these networks are sometimes formalized as groups, which can develop their own rules, lore, inside-jokes, slang, and so on.
But ideological alignments go beyond groups: they exist through distributed networks of people defined by differences in their ability to influence and comply, lead and follow. At an abstract level, everyone is a relay of opinions, acting as a point of filtering. This filtering takes the form of binary decision-making: to share or not share, to display approval or disapproval. Creating ‘content’ makes you the source of an opinion, and your capacity as a distributor depends on your audience count (your following) and the novelty of your content (the X-factor). For instance, a viral tweet made by a low-audience account would always have high novelty (my tweet on PhD villains). And a low-novelty item distributed by a highly influential account is still effectively viral (NYT article), as the toy model below suggests.
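As a rough sketch of this relay logic, here is a toy cascade model of my own, with made-up parameters, not a validated diffusion model. It has exactly the two dials named above: each sharer’s audience size, and the probability that an exposed follower relays the item onward, standing in for novelty.

```python
import random

# Toy cascade model of opinion relaying (illustrative parameters only).
def simulate_reach(audience, p_share, waves=3, seed=0):
    """Each sharer exposes `audience` followers; each exposed follower
    re-shares with probability `p_share` (a stand-in for novelty)."""
    rng = random.Random(seed)
    exposed, sharers = 0, 1  # start from one original poster
    for _ in range(waves):
        newly_exposed = sharers * audience
        exposed += newly_exposed
        # Count how many of the newly exposed relay the item onward.
        sharers = sum(rng.random() < p_share for _ in range(newly_exposed))
    return exposed

# A highly influential account spreads even low-novelty content widely,
# while a small account needs high novelty for its content to travel:
print(simulate_reach(audience=50, p_share=0.10))
print(simulate_reach(audience=5000, p_share=0.001))
```

The two dials multiply at every wave, which is why a big enough following can compensate for bland content, and a novel enough item can escape a tiny following.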
Epistemic bubbles, specifically regarding sociopolitical views, are defined by alignments in a cloud-like, always-mutating network of relaying nodes. Today, the quantity of opinion-relaying is still largely determined by the ebbs and flows of long-established news outlets. Second place in the chain goes to influencer networks reacting to mainstream accounts. This is an intensely complex ecology that defies simple representation. But that doesn’t stop anyone from simplifying it with crude conceptions such as liberals versus conservatives, and in turn the world unfolds through our conceptions of it anyway. So, nothing is lost. Most of us experience this system as a Skinner box of pleasure: fighting ideological turf wars, getting likes and shares from like-minded people, feeling one’s existing beliefs confirmed in ways one hadn’t thought of, or simply enjoying the presence of other people.
[present diagram] So, all epistemic bubbles involve at least one of the mentioned mechanisms: the lack of coverage-reliability, the gaming of cognitive fluency, the committing of the fallacy of corroborative bootstrapping, developing parasocial relationships with opinion-influencers, and the pleasure involved in social activities. Note that these mechanisms are all involved in echo chambers as well. What differentiates echo chambers is their unique set of mechanisms: namely evidential preemption, disagreement-reinforcement, and externality mystification.
This categorization should not mislead us into thinking the two are mutually exclusive. Rather than a binary pair, they are more like a spectrum of varying intensities. Epistemic bubbles can gain intensity and gradually become echo chambers, and vice versa. Also, their existence appears to us as a given, which demands we ask why they emerge in the first place. This requires us to shift our attention from a merely synchronic, ahistorical view, and instead consider the diachronic, historical perspective.
The Historical Angle: Path-Dependence
So epistemic bubbles are not abstractions. The emergence of each is determined by a unique combination of historical forces. In other words, these belief systems don’t come from nowhere, and they’re never entirely senseless. For example, there’s no epistemic bubble for firefighting, but there are plenty in politics, journalism, and academia.
The formation of bubbles can be approached from two angles: one speculates on social history, the other on the personal history of an individual. We can view the two together, through what I’ll call a two-step path-dependency model. The first step concerns the environment into which one is born. This involves factors such as parental beliefs, economic strata, identity makeup, local culture, and a host of other factors. This is already an offline epistemic bubble, which primes a person for the second step: how their life unfolds in an online algorithmic environment.
So, to begin: most people are born into households, in which the parents assume the role of rearing the child, and this is where the inheriting of habits and ideas takes place. The parent is not an isolated entity, but is embedded in a local culture, which is embedded in a larger social environment, which is located on a planet, which is in a solar system, which is in a galaxy, and so on and so on. So, one learns from one’s parents, and often from one’s local community. This process already forms the basis on which one’s involvement with the internet will unfold.
For example, if your parents are Evangelical Christians and you became their copy, your internet experience would at first throw a variety of opinions at you. Here, you are likely to be attracted to opinions that align with your existing worldview, such as anti-abortion, which would eventually land you in a network of like-minded people and opinion-influencers: an epistemic bubble. So, at the level of experience, a person’s offline education and online engagement form a kind of seamless continuation. Of course, the internet could destabilize your preexisting worldview, perhaps even convert you to something else, and personal transformation through the internet is its own interesting topic. But here I focus on just the one scenario: what happens when people don’t deviate much from their starting points.
These two steps, offline then online, are underscored by an obvious but troublesome fact: the chronological order in which one acquires ideas is highly significant to one’s intellectual trajectory. Once someone has acquired a belief, they tend to meet counter-arguments with greater scrutiny, if not hostility, and go easy on arguments that support their existing views. In the ideal scenario, we would form our opinions by considering all the accessible evidence around us; the chronological order in which we encounter this evidence should be irrelevant to the opinions we form. This is called the commutativity of evidence principle.[xx] Reality, however, pushes us to violate this principle almost universally. The intellectual trajectories of all of us are quite path-dependent. We acquire new opinions based on the things we already believe, which depend on our previous decisions and experiences, which are in turn conditioned by a host of social and historical forces outside of our awareness and control. We’re entangled in this path-dependence almost entirely unconsciously, mostly because we are unable to perceive these forces. You cannot decide on what you cannot see.
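To make the principle concrete, here is a small sketch (my own toy contrast; the ‘discount’ rule is hypothetical, not from Kelly’s paper): a textbook Bayesian update lands in the same place whatever the order of the evidence, while an updater that scrutinizes belief-incongruent evidence harder does not.

```python
# Order-independent (Bayesian) updating vs. a path-dependent updater
# that discounts evidence against its current lean (hypothetical rule).

def bayesian(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr                  # multiplication order never matters
    return odds

def biased(prior_odds, likelihood_ratios, discount=0.5):
    odds = prior_odds
    for lr in likelihood_ratios:
        if (lr > 1) != (odds > 1):  # evidence against the current lean...
            lr **= discount         # ...is shrunk toward 1 (weakened)
        odds *= lr
    return odds

evidence = [4.0, 0.25]  # one report for the claim, one equally strong against

print(bayesian(1.0, evidence), bayesian(1.0, evidence[::-1]))  # 1.0, 1.0
print(biased(1.01, evidence), biased(1.01, evidence[::-1]))    # ~2.02, ~1.01
```

With a perfectly balanced body of evidence, the biased updater ends up believing whichever side it happened to hear first: the commutativity of evidence, violated in four lines.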
Nothing sets us on a path more firmly than our early-life environment: parental values, habits, education, socioeconomic status, racial and other identity markers. All of them outside of our control. To put it in dramatic terms: we don’t know we’re in epistemic bubbles because we’ve been in one since birth. And many may lead their lives never deviating from their starting point in any meaningful way.
This is where the algorithms continue our existing path dependence. The consumer internet experience nudges the user to stay in their existing bubbles.[xxi] Your online signature, such as your search record on Google and engagement history on Facebook, composes your profile on these platforms. These data are then used to suggest content and advertisements that appeal to you. We can immediately identify two problems with this arrangement. One is individual privacy, which has been gradually addressed by these companies’ adoption of depersonalized (anonymized) data capture, which aims to know what your desires are without knowing who you legally are.
The second concern is that this arrangement creates, well, epistemic bubbles, or, specific to search engines, filter bubbles. The algorithm sets the user on a content path-dependence, where the user won’t even be aware that the content they’ll receive in the future, perhaps forever, is defined by their early search decisions. If you start off searching for opinions against abortion, you’ll keep finding content of that alignment. You’ll discover more arguments against abortion, ones you hadn’t considered before, reinforcing your original opinion. Further, a single issue such as abortion can open the user to an alignment of opinions on a host of issues, such as those against gay rights and taxation. Or, if you already believe that Trump’s election was decided by the Russians rather than by America’s domestic problems, it’d be difficult to convince you that Cambridge Analytica’s microtargeting program actually didn’t work. Which means, to repeat a point I made earlier: you won’t notice that a platform is not a window onto the world, but a mirror for each person’s individual desires.
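Here is a minimal feedback-loop sketch of that content path-dependence (hypothetical parameters of my own; real recommenders are vastly more complex): the recommender boosts whatever alignment the user engaged with, engagement is itself drawn from what is recommended, and the loop compounds the very first click.

```python
import random

# Minimal recommender feedback loop (hypothetical, illustrative parameters).
def feedback_loop(first_click, rounds=5000, seed=0):
    rng = random.Random(seed)
    weights = {"A": 1.0, "B": 1.0}  # two content alignments, initially equal
    weights[first_click] += 1.0     # the user's very first search/click
    for _ in range(rounds):
        total = weights["A"] + weights["B"]
        # Serve alignment A with probability proportional to its weight...
        served = "A" if rng.random() < weights["A"] / total else "B"
        weights[served] += 1.0      # ...and engagement reinforces the served side.
    return weights["A"] / (weights["A"] + weights["B"])

print(feedback_loop("A"))  # share of A-aligned content: biased upward by the first click
print(feedback_loop("B"))  # same seed, opposite first click: the mix leans toward B instead
```

These are Pólya-urn-style rich-get-richer dynamics: on average, the long-run mix simply preserves whatever advantage the earliest clicks established, which is path dependence in its purest form.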
This filter-bubble condition is quite murky in reality, because most people actually want to be fed information that appeals to their interests and beliefs. And at the same time, platform companies want to design the conditions for users to keep using their platforms, namely by giving users what they want. After all, most of their revenues[xxii] come from selling advertisement space: the platforms want your consistent attention. Of course, there are more variables involved, but the implicit pact boils down to this: we want what we want to see, opinion-influencers supply it, and the internet platforms provide a smooth space in which this production and consumption become routine.
Conclusion
If you’ve noticed that I haven’t addressed the social-historical angle in any concrete way, you’d be correct. What I addressed was only the second step of the two-step path-dependence model, the one about a hypothetical person setting foot in the algorithmic environment. The fact is, any actual social-historical account is inherently particular, meaning we can’t make sweeping abstractions about how particular bubbles were formed. Everything I talked about—evidential preemption, parasocial relationship building, cognitive fluency, and so on—are the common, formal mechanisms operating in epistemic bubbles. But these mechanisms say nothing about how the bubbles themselves were formed, which is a historical question.
Why is there an Evangelical Christian community in America in the first place? Why do the opinion-influencer networks of different ideologies have an audience at all? Why were prestige media so intent on attributing Trump and Brexit to Russian interference? Why is there a bipartisan rhetoric against China, Russia, and Iran in the West? Why is there a lack of trust in the government and prestige media in the first place, which in turn primes people to be attracted to conspiracy theories? These are historical questions, more than questions about the formal mechanisms of epistemic bubbles.
Another important angle I did not get into in detail is how opinion production is mediated by economic interests at a variety of scales, in addition to the political geography of ideological messaging. But that may be a topic for a future video.
Finally, the million-dollar question: is there a way out? C. Thi Nguyen proposed an escape route for those trapped in echo chambers,[xxiii] which involves a radical “reboot” of one’s belief system: abandoning one’s existing beliefs entirely and starting from scratch. But this requires someone to earn the trust of those within the echo chamber, so that they can become self-motivated to change their minds at all. And this technique can only work person to person, a small-scale, delicate, and time-consuming task. Further, Nguyen expressed pessimism regarding the possibility of applying this method at an institutional and social scale.
I think there’s a second problem: there’s no such thing as being outside of the ponds. The moment we think we’ve escaped a pond, we find ourselves in another one. [Zizek clip: “The tragedy of our predicament when we’re within ideology is that when we think we escape it, to our dreams, at that point, we are within ideology.”] After Christian Picciolini freed himself from the deplorable neo-Nazi movement, he… ends up on a TED Talk, promoting platitudes of positivity and tolerance, which belong to another ideology.[xxiv][xxv]
In this video, I’ve identified the mechanisms. And that’s about all that I can do.
———————
[i] Bush, George W. 2001, Sept 21. “You’re either with us, Or with the terrorists.” archive.org. https://web.archive.org/web/20150112170258/http://www.voanews.com/content/a-13-a-2001-09-21-14-bush-66411197/549664.html.
[ii] Menegus, Bryan. 2016, Dec. “Pizzagaters Aren't Giving This Shit Up.” Gizmodo. https://gizmodo.com/pizzagaters-arent-giving-this-shit-up-1789692422
[iii] Begby, Endre. 2020. “Evidential Preemption.” Philosophy and Phenomenological Research. 00:1-16. DOI: 10.1111/phpr.12654
[iv] Nguyen, C. Thi. 2018. “Echo Chambers and Epistemic Bubbles.” Episteme 17(2):141-161. DOI: 10.1017/epi.2018.32
[v] Herman, Edward S. 2017. “Fake News on Russia and Other Official Enemies.” Monthly Review. https://monthlyreview.org/2017/07/01/fake-news-on-russia-and-other-official-enemies/
[vi] Wikipedia. Battle of Athens. https://en.wikipedia.org/w/index.php?title=Battle_of_Athens_(1946)&oldid=939446850
[vii] For an overview of the virtues in conspiracy thinking, see Stroll, Branwen Brigid. “Virtuous Conspiracy Theories.” https://www.academia.edu/45673447/Virtuous_Conspiracy_Theories_Branwen_Brigid_Stroll_BA_Student_Philosophy_Simon_Fraser_Stroll_1.
[viii] For an elaborate treatment of the rationality of conspiracy thinking, see Dentith, Matthew R. X. 2019. “Conspiracy theories on the basis of the evidence.” Synthese 196(6):2243-2261. DOI: 10.1007/s11229-017-1532-7
[ix] Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding From You. London: Penguin UK.
[x] Nguyen 2018.
[xi] Ibid.
[xii] Goldberg, Sanford. 2010. Relying on Others: An Essay in Epistemology. Oxford: Oxford University Press. Cited in Nguyen 2018.
[xiii] Oppenheimer, Daniel M. 2008. “The Secret Life of Fluency.” Trends in Cognitive Sciences 12(6):237-41. DOI: 10.1016/j.tics.2008.02.014
[xiv] Nguyen, C. Thi. 2019. The Gamification of Public Discourse. 10:33 – 10:52
[xv] Nguyen, C. Thi. 2010. “Autonomy, Understanding, and Moral Disagreement.” Philosophical Topics, 38(2):111-129. DOI: 10.5840/philtopics201038216
[xvi] Kiernan, Vincent. 2014. “Medical Reporters Say ‘No’ to ‘Pack’ Journalism.” Newspaper Research Journal 35(2):40-54. DOI: 10.1177/073953291403500204
[xvii] Rubin, Rebecca B. and Michael P. McHugh. 1987. “Development of Parasocial Interaction Relationships.” Journal of Broadcasting & Electronic Media. 31 (3):279–292. DOI: 10.1080/08838158709386664
[xviii] Benkler, Yochai. 2020. “A Political Economy of the Origins of Asymmetric Propaganda in American Media.” Pp. 43-66 in The Disinformation Age: Politics, Technology, and Disruptive Communications in the United States, edited by W. L. Bennett and S. Livingston. Cambridge University Press.
[xix] Curran, James, Natalie Fenton, and Des Freedman 2016. Misunderstanding the Internet. Routledge Press. See also Aigrain, Philippe. 2012. Sharing: Culture and the Economy in the Internet Age. Amsterdam University Press.
[xx] Kelly, Thomas. 2008. “Disagreement, Dogmatism, and Belief Polarization.” Journal of Philosophy 105(10):611–33. DOI: 10.5840/jphil20081051024
[xxi] Alfano, Mark, et al. 2018. “Technological Seduction and Self-Radicalization.” Journal of the American Philosophical Association (2018):298-322. DOI: 10.1017/apa.2018.27
[xxii] Soto Reyes, Mariel. 2020, Dec 3. “Google, Facebook, and Amazon will account for nearly two-thirds of total US digital ad spending this year.” Business Insider. https://www.businessinsider.com/google-facebook-amazon-were-biggest-ad-revenue-winners-this-year-2020-12
[xxiii] Nguyen, C. Thi. 2018. Escape the echo chamber. Aeon. https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult
[xxiv] Picciolini, Christian. 2017. “My Descent into America's Neo-Nazi Movement & How I Got Out.” TED talk.
[xxv] Zizek, Slavoj. 2008. “Tolerance as an Ideological Category.” Critical Inquiry 34(4):660-682. DOI: 10.1086/592539