Escaping the Echo Chamber


How clearly do you see the world? It's a question that has troubled humanity for millennia. And yet, for most of that time, the problem was conceived in terms of scarcity. Real knowledge was not just hard to acquire but hard to access and disseminate. Our present wealth of information would strike our curious ancestors as the answer to their prayers. Indeed, this optimism about the power of open and abundant information seems to have persisted all the way up until the dawn of the information age. It's a vision that inspired many of the earliest digital technology pioneers. Even now, as the US reaches unprecedented levels of polarization, we reflexively assume that the problem is a lack of information (on the part of the other side). They're trapped in filter bubbles, we say; they're not being exposed to better ideas.

In a weak sense, this is surely true. Few among us have the time, bandwidth, or inclination to verify every claim or track down the best arguments for every position. We rely, of necessity, on certain authorities, and the blind spots of those authorities may become entrenched in our worldviews. But the idea that our civic discontents would be ameliorated if only we could puncture our filter bubbles is becoming harder and harder to defend. In fact, as we discussed in an earlier newsletter, there is some evidence to the contrary: one study found that exposure to opposing views online actually increased polarization. And while the famous "backfire effect" has proven difficult to replicate, the sheer rate at which we clash with one another online demonstrates that mere exposure does not produce ideological harmony.

The philosopher C. Thi Nguyen points out one problem in our understanding of this topic: we often conflate epistemic (filter) bubbles and echo chambers. The former are characterized by the exclusion of outside ideas; the latter, by the active discrediting of outside sources. While both exist, echo chambers are the more pervasive and pernicious phenomenon. One reason our efforts to reduce polarization by popping filter bubbles have failed, Nguyen argues, is that such bubbles rarely exist in the pristine sense that we might have imagined. Echo chambers, by contrast, are widespread and can persist or even strengthen in the face of outside information. Inhabitants of echo chambers, Nguyen writes, "are isolated, not by selective exposure, but by changes in who they accept as authorities, experts, and trusted sources." Leaders in these groups engage in "evidential preemption": they discredit outside sources and facts in advance, converting any challenge to their views into further proof of the hostility of the outgroup. Arguments of this kind are easy to spot for their structure: "So and so would say that; he/she is a _____(member of the lamestream media, white male, SJW, etc.)."

The negative effects of echo chambers have been exacerbated in recent years by the collapse, both real and perceived, of institutional authority. Traditional authorities like journalism, academia, and government are now objects of considerable doubt and polarization. Though real biases exist in these institutions, information abundance alone is sufficient to sow such doubt. Martin Gurri summarizes the condition perfectly:

Uncertainty is an acid, corrosive to authority. Once the monopoly on information is lost, so too is our trust. Every presidential statement, every CIA assessment, every investigative report by a great newspaper, suddenly acquired an arbitrary aspect, and seemed grounded in moral predilection rather than intellectual rigor. When proof for and against approaches infinity, a cloud of suspicion about cherry-picking data will hang over every authoritative judgment.

The problems of echo chambers, filter bubbles, and institutional mistrust are generally framed epistemically; they're questions about truth and how to spread it. But even if we could agree about a critical mass of individual facts, we still face a problem of valence or significance. Because we craft our own digital environments, we tend to find ourselves surrounded by the kinds of facts that correspond to our pre-existing concerns. These facts build up for us a tacit sense of what the most pressing problems are, which in turn forms the background model for any political or moral expression we make on social media. These expressions are then received by others in a decontextualized manner and can clash with their respective background models, furthering polarization even when no specific facts are in dispute.

Consider the following hypothetical example. There are two Twitter users, one of whom ("X") is strongly concerned about mob justice and online pile-ons and the other of whom ("Y") is strongly concerned about the silencing of sexual assault survivors. Each inhabits a Twittersphere that surfaces the most egregious examples of their respective concerns. X does not support silencing sexual assault survivors; Y does not support online mobs.

X might tweet, "Cancel culture is bad." Y might interpret this, based on prior experience, as an attack on the overdue accountability faced by predators like Harvey Weinstein. Y might tweet, "People arguing against 'cancel culture' want immunity for sexual predators." X will interpret this as a willful misinterpretation and be nudged toward the impression that sexual assault activists are blind to the harms of mob justice. This will in turn inflame Y's sense that critiques of "cancel culture" are directed against sexual assault survivors. All the while, those members of X's and Y's respective "camps" who actually do conform to the worst interpretations will be the most likely to chime in and comment. Thus adversarial groups can be created where they need not have existed, due purely to the nature of digital environments.

What can be done to escape this trap? A precise vocabulary can help. Be mindful of the distinction between echo chambers, filter bubbles, and what might be called "spheres of relative concern" (as in the example above). Recall that the context you bring to a given tweet very likely diverges from the context its author imagined when writing it. Exercise caution when using popular hashtags or slogans; these often spread precisely because they invite divergent interpretations and therefore engagement. Finally, there is no substitute for the principle of charity: steelman (rather than strawman) your opponent's argument or imprecise tweet. As long as our platforms and public conversation conspire to divide us, we must resist all the more vigilantly as individuals.
