What Truths Break Through: How Do You Know?



Are citizens of modern, liberal democracies—democratic societies that encourage a thriving public sphere unconstrained by top-down informational control and censorship—capable of forming accurate, informed political opinions? In the last post, I explored four reasons for pessimism: complexity, invisibility, rational ignorance, and politically motivated cognition. In this post, I will move away from individual-level biases and constraints and focus on the broader social conditions that structure media and the public sphere within open societies.

It is a popular idea that the “truth” will emerge from a free and open marketplace of ideas. Although appealing in some ways, this optimism is naive.

I mentioned some of the problems it confronts in the last post.

For example, in contrast with most consumer goods and services, it is unclear how ordinary citizens can reliably evaluate the ideas circulating within the public sphere. Figuring out the truth about complex political issues is highly challenging. If people are selling untruths—simplistic, unfounded, or inaccurate ideas—how would ordinary citizens know?

It is also unclear why most citizens would be motivated to acquire political information, let alone engage in time-intensive political debate, in a system—democracy—where individual voters have almost no say in collective decisions.

In this post, I will explore another issue, one which I have written about elsewhere, both in published academic research and on this blog: In practice, the “marketplace of ideas” within liberal democracies often functions like a marketplace of rationalisations, an informational economy in which pundits, journalists, intellectuals, and media outlets compete to produce justifications of the claims, narratives, and decisions favoured by different political and cultural tribes in society.

First, however, it is helpful to consider a very different—and more influential—critique of the optimistic assumptions associated with the concept of a marketplace of ideas.

As I have noted repeatedly on this blog, one of the dominant ways journalists, social scientists, and politicians today attempt to understand epistemic problems in society involves the concept of “misinformation” (or “disinformation”, often defined as “intentional misinformation”).

According to this narrative—what I will call the “misinformation narrative”—Western democracies have recently experienced an explosion of misinformation connected primarily to social media, right-wing populist politicians, and foreign (e.g., Russian) influence campaigns. As a result, many citizens are embracing misperceptions—for example, conspiracy theories, election denial, anti-science beliefs, and so on—which are leading them to make bad decisions, such as voting for demagogues, attacking democratic institutions, and rejecting public health advice.

Although this narrative is very influential—in some parts of academia and media, it is almost hegemonic—some contrarian researchers (myself included) have criticised it. Their counter-narrative typically rests on three core ideas.

First, whereas the misinformation narrative assumes that people are gullible and easily swayed by what they read or see, the counter-narrative observes that people are epistemically vigilant: if anything, they are too pig-headed rather than too credulous. Even from a young age, humans are sceptical social learners, evaluating the plausibility of messages and the trustworthiness of messengers in sophisticated—and often suspicious—ways. For this reason, most attempts at mass belief manipulation fail, even when they are extraordinarily well-funded and intensive.

Given this, it seems unlikely that people are changing their attitudes dramatically en masse in response to strange fake news stories and unfounded conspiracy theories on social media.

Second, proponents of the counter-narrative point out that misinformation is relatively rare in Western democracies. That is, at least if researchers restrict their focus to very clear-cut misinformation such as fake news (in the technical sense of completely fabricated news stories), most people encounter very little of it in their media diet. Partly, this is because most people—rationally, given how little their vote influences policies—do not pay much attention to “current affairs”. However, it is also because people who do pay attention overwhelmingly tune into mainstream news, which rarely publishes fake news or low-quality misinformation.

Finally, proponents of the counter-narrative point out that misinformation mostly preaches to the choir. That is, against a popular image of otherwise ordinary people stumbling across a TikTok video, falling down a rabbit hole, and ending up a QAnon believer, research suggests that misinformation caters to a non-random segment of the population characterised by traits such as highly conspiratorial worldviews, low trust (often active distrust) of institutions, and intense partisan animosity (i.e., deep hatred of ideological enemies).

Because of this, it seems unlikely that misinformation is changing people’s behaviours much. Instead, the situation seems to be that people with specific pre-existing worldviews and intentions seek out congenial information, including misinformation. That is, misinformation appears to be symptomatic of deeper societal problems rather than an exogenous driver of them.

As Sacha Altay and colleagues put it, “The influence of misinformation on people’s behaviour is overblown as misinformation often ‘preaches to the choir’.”

Given these three ideas, proponents of the counter-narrative often claim that modern alarmism about misinformation amounts to a kind of “moral panic”, a sort of liberal-establishment hysteria completely unsupported by scientific evidence concerning misinformation’s prevalence and harms.

I have previously endorsed a version of this counter-narrative. Nevertheless, it confronts an obvious objection: It seems to imply an implausibly optimistic analysis of media, political debate, and punditry, much of which involves content and coverage that seems, well, terrible, at least relative to an ideal of thoughtful, balanced, objective, rational, fair-minded analysis and communication.

To quote Philip Tetlock, whose own scientific research has exposed how unreliable and overconfident even much “expert” commentary is,

“When the audience of 2515 looks back on the audience of 2015, their level of contempt for how we go about judging political debate will be roughly comparable to the level of contempt we have for the 1692 Salem witch trials.”

Although perhaps too optimistic about the future—I am sceptical things will be much better in 500 years—this harsh assessment of the present seems right. Even if clear-cut misinformation is relatively rare in the media ecosystem (in Western democracies, at least), it seems undeniable that extremely low-quality—biased, selective, cherry-picked, unreliable, and so on—information is widespread.

More carefully, the worry with the counter-narrative can be stated as follows.

First, even if fake news and other extremely low-quality misinformation are pretty rare and non-impactful in Western democracies, it seems indisputable that there is a vast amount of highly misleading (e.g., selective, cherry-picked, biased, etc.) content in political speech, punditry, and media coverage.

Second, even if people are not gullible and misinformation often preaches to the choir, epistemic vigilance is not magic—short of simply ignoring all information, there is no way of being perfectly protected against accepting false or misleading content—and there is evidence that selective, skewed, and biased reporting shifts people’s attitudes and behaviours.

Given this, what is needed, I think, is a theoretical framework that can reconcile four ideas:

  1. People are epistemically vigilant.

  2. Clear-cut misinformation (e.g., fake news) is relatively rare.

  3. Misinformation often preaches to the choir.

  4. Misleading information is much more widespread and impactful.

In my view, the concept of a marketplace of rationalisations provides an illuminating way of integrating these facts.

I will first detail how to understand such a marketplace. I will then explain why rationalisations rarely take the form of clear-cut misinformation and why they can be genuinely impactful—and harmful—even though, in some sense, they merely respond to “audience demand.”

The concept of a marketplace of rationalisations builds on four simple ideas.

First, people—elites, activists, partisans, passionate voters, and so on—are often motivated to make or endorse controversial claims and decisions. This includes:

  • The policies and actions of their favourite political parties, movements, and leaders.

  • Self-serving and alliance-serving narratives. (Lots of political discourse involves a variation on the following theme: “Me and my allies are rational, benevolent, and just, and hence deserve power, status, and influence; our rivals and enemies are an irrational and vicious threat to society.”).

  • Whatever beliefs and ideologies are identity-defining for specific political and cultural tribes that function as a source of connection and status for people;

  • Strong but stigmatised intuitions (e.g., conspiratorial and vaccine-sceptical intuitions). 

Second, when people make or endorse controversial claims or decisions—for example, claims or decisions that are contested, harm other people’s interests, or violate norms—they are strongly motivated to share reasons, i.e., evidence or arguments that they can use to justify themselves to others and win social support. In general, much of human reasoning—the generation and evaluation of reasons—is bound up with social processes of persuasion, argument, and reputation management. Given this, motivations to endorse controversial claims or decisions create a demand for rationalisations.

In ordinary social life, the task of producing rationalisations typically falls on the individuals making the relevant decisions. If I violate a norm, the task of attempting to justify my norm violation—to convince others that it was not really a violation, for example, or that it was a violation but one justified by circumstances—falls on me. However, in some circumstances, people outsource the task of producing rationalisations to others.

The most familiar examples are professionals such as lawyers, press secretaries, and public relations teams. For example, the job of a presidential press secretary is not—ultimately—to figure out or communicate the truth about the world. It is to take a predetermined conclusion—“the president’s actions are wise, rational, and just”—and select, frame, and organise information in ways conducive to rationalising it.

In return for their cognitive labour, they are handsomely rewarded.

This brings me to the third point: A very similar phenomenon arises in the media and among political pundits, commentators, and professional opinion-givers, albeit in ways that are much more bottom-up, informal, and subject to self-deception.

Generally, when people provide us with information relevant to our goals and interests, we reward them. Just as we dislike and disapprove of those who spread deceptive or unreliable information, we like, admire, and feel grateful towards good sources of information. Indeed, some speculate that a distinctive form of social status in our species—prestige—evolved through a process in which people “pay” good sources of information with admiration, deference, and respect in exchange for access to the information they can provide. Whether or not that is true, a complex system of social rewards and punishments—norms, reputations, social approval/disapproval, and status games—regulates communication and the division of cognitive labour in human social life.

This is equally true of rationalisations. Just as we are grateful for reliable information that helps us achieve our goals, we like, admire, and defer to those who provide us with evidence and arguments we can use to justify our decisions, protect our reputation, and mobilise social support.

Of course, in most cases, the costs of producing rationalisations for others outweigh these social rewards. This is why the task of producing rationalisations typically falls on the individuals seeking to justify their claims or decisions, except in the case of elites with enough power, wealth, and influence to compensate others for providing them with helpful, self-serving justifications.

However, things are very different in democratic politics. In this context, motivations to endorse specific claims and decisions are widespread among members of society’s various political and cultural tribes. Consequently, the social rewards—the potential profits—for producing content that rationalises those claims and decisions are much greater. Given this, those who become trusted sources of high-quality intellectual ammunition for society’s large-scale political or cultural communities can win extraordinary attention, status, and financial benefits.

This leads to the fourth and final idea: This process in which people—pundits, commentators, intellectuals, media outlets, and so on—compete to win social and financial rewards by functioning as de facto lawyers and press secretaries for different tribes in society can be understood as a kind of market. Indeed, it is a marketplace of ideas, except rather than audiences behaving as disinterested truth-seekers, they shop around for high-quality rationalisations of the actions, policies, and narratives they support.

More specifically, this market involves:

  1. Supply and demand. Media outlets, influencers, pundits, intellectuals, and so on create and disseminate information that caters to audience demand for rationalisations of specific actions, policies, and narratives.

  2. Competition. Just as firms compete by offering better products or lowering prices, rationalisation producers compete by offering more appealing and persuasive evidence and arguments. The freer the marketplace of ideas, the more intense this competition is, as consumers instinctively gravitate towards those sources that best validate and justify their preferred decisions and narratives.

  3. Specialisation. Rationalisation producers devote their time, energy, and ingenuity to where they earn the greatest rewards, generating specialisation as producers target unmet demand.

  4. Medium of exchange. Although money often changes hands in rationalisation markets—think of paying for a subscription to a newspaper or Substack—the more basic medium of exchange involves social rewards such as attention, credibility, and status. (Of course, these social rewards can often be used to make money—for example, by selling access to one’s audience to advertisers).
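The supply-and-demand, competition, and specialisation dynamics listed above can be made concrete with a toy simulation. This is only an illustrative sketch, not anything from the research discussed here: the function `simulate_market`, the one-dimensional “slant” scale, and all the numbers are hypothetical assumptions chosen for simplicity.

```python
import random

def simulate_market(n_consumers=1000, n_producers=10, rounds=50, seed=0):
    """Toy model of a rationalisation market.

    Consumers hold priors on a one-dimensional slant scale (-1 to +1).
    Each round, every consumer rewards (pays attention to) the producer
    whose slant best matches their own prior; producers then drift toward
    the mean prior of their audience, i.e. they specialise.
    """
    rng = random.Random(seed)
    consumer_priors = [rng.uniform(-1, 1) for _ in range(n_consumers)]
    slants = [rng.uniform(-1, 1) for _ in range(n_producers)]

    for _ in range(rounds):
        rewards = [0.0] * n_producers
        followers = [[] for _ in range(n_producers)]
        for prior in consumer_priors:
            # Demand: each consumer patronises the closest-matching producer.
            best = min(range(n_producers), key=lambda i: abs(slants[i] - prior))
            rewards[best] += 1
            followers[best].append(prior)
        # Specialisation: producers chase the rewards by moving toward
        # the average position of the audience they currently attract.
        for i in range(n_producers):
            if followers[i]:
                audience_mean = sum(followers[i]) / len(followers[i])
                slants[i] += 0.5 * (audience_mean - slants[i])

    return slants, rewards

slants, rewards = simulate_market()
```

Even in this crude setup, competition for audience attention spreads producers across the spectrum of consumer priors rather than concentrating them at the truth-seeking centre: each producer ends up serving, and mirroring, a distinct segment of demand.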

This is all highly abstract. The simplest concrete example of these dynamics involves partisan media.

Partisans generally have a strong demand for information—evidence and arguments—that rationalises the actions, policies, and narratives favoured by their political party. At the individual level, this drives politically motivated cognition. However, it also generates a widespread demand for partisan rationalisations, as “consumers tend to choose media whose biases match their own preferences or prior beliefs”, with this “tendency to select news based on anticipated agreement… [being] strengthened among more politically engaged partisans.”

Although some people depict this demand as a desire for truth from trusted sources, some research suggests that partisans do not reduce their demand for biased content even when they know it is biased as long as it supports their political side.

Responding to this demand, partisan media outlets (e.g., Fox News, GB News, MSNBC, etc.) and pundits (e.g., Tucker Carlson, Rachel Maddow) compete to select, create, and frame evidence and arguments that appeal to partisan audiences. That is, “[D]ifferent media outlets indeed select, discuss, and present facts differently, and they do so in ways that tend to systematically favour one side of the political spectrum or the other.”

This dynamic plays out in numerous other contexts.

For example, fringe, extremist, and conspiratorial subcultures often create a prestige economy in which people can become minor celebrities by churning out content that affirms identity-defining narratives.

Moreover, military conflict of all kinds—think Israel/Palestine or Russia/Ukraine—inevitably gives rise to a lucrative status game in which intellectuals, pundits, journalists, and media outlets can achieve considerable success by affirming and rationalising the competing sides’ preferred interpretations of history and current events.

In summary,

  • Rationalisation markets cater to the demand among society’s influential political and cultural tribes for justifications of their favoured narratives and decisions.

  • These markets are predictable based on general features of human psychology and sociality.

  • They illuminate general patterns observed in media, punditry, and political debate.

At the most abstract level, a “rationalisation” is any information that can justify predetermined claims or actions.

Consider the “Big Lie”, for example. Trump and Trump supporters were motivated to challenge the legitimacy of the 2020 presidential election and overturn the result. This created an intense demand for rationalisations—for evidence and arguments that Trumpists could use to justify this biased, self-serving narrative. In response, a thriving media ecosystem emerged, disseminating “evidence” of voter fraud, testimony from trusted sources casting doubt on the election, and subtle statistical analyses of voting patterns designed to demonstrate fraud.

For these reasons, rationalisations tend to be misleading. Their function is not to inform people of the truth but to justify conclusions favoured for reasons independent of their truth. Given this, rationalising information is often cherry-picked, framed, packaged, and organised in deceptive ways.

However, it rarely takes the form of misinformation.

Admittedly, it is very unclear what the term “misinformation” is supposed to refer to. Nevertheless, most misinformation research focuses on clear-cut, low-quality misinformation, such as fake news. It then estimates the overall prevalence of misinformation in the media ecosystem by calculating people’s exposure to low-quality websites that publish such content.

So understood, rationalisation and misinformation are distinct concepts, and we should expect the most effective rationalisation producers to refrain from publishing outright misinformation.

There are two reasons for this.

First, rationalisations ultimately serve social functions of persuasion, argument, recruitment, and reputation management. Because misinformation is generally unpersuasive and easily discredited, it does not serve these functions well.

Second, people will shop around for credible, trustworthy rationalisation producers. To the extent that pundits, journalists, or media outlets acquire a reputation as fake news producers, this will undermine the value of their output.

The analogy with lawyers and press secretaries helps here. Lawyers do not just make things up. Instead, they are highly skilled at selecting, framing, packaging, and organising accurate information in extremely biased ways. The same applies to media outlets and commentators that perform a rationalising function. When it comes to partisan media, for example, there is a consensus that:

“All the [biased] accounts are based on the same set of underlying facts. Yet by selective omission, choice of words, and varying credibility ascribed to the primary source, each conveys a radically different impression of what actually happened.”

Nevertheless, does any of this matter? Rationalising information preaches to the choir. It functions solely to justify claims, narratives, or decisions people are already motivated to make or endorse. Is it, therefore, harmless?

This is a big topic—and this post is already unconscionably long—but I will end by briefly reflecting on why the information circulating within rationalisation markets is both a response to audience demand and a source of misperception and bad societal decision-making.

These harms—or at least those I will focus on here—ultimately have their roots in an essential feature of rationalisations: Unlike absurd fake news stories and bizarre conspiracy theories, they are designed to be genuinely persuasive.

Humans are strongly motivated to produce reasons to justify their beliefs and decisions. In this sense, much of reasoning is post hoc rationalising. It is lawyerly. Nevertheless, as Hugo Mercier and Dan Sperber point out, just as you can employ a skilled defence lawyer to defend your innocence, you can also employ one in an advisory role, letting you know when specific actions would be indefensible. In that case, the inability to rationalise a decision is a strong reason not to make it.

Something similar is true of human psychology, generally. When claims or decisions cannot be rationalised, we are typically motivated to avoid or abandon them unless we are so powerful we do not depend on social approval and support. This fact—that human judgement and decision-making is generally subject to a powerful rationalisation constraint—limits the excesses of bias. For example, although many people are biased towards self-aggrandising beliefs, self-aggrandisement typically has strong limits: people believe they are better than they are, but only a bit better—only as much as they can subjectively rationalise.

One problem with rationalisation markets is that they weaken this constraint, making it easier for people to cling to unfounded beliefs and intuitions. That is, even when they do not cause people to embrace radically different kinds of beliefs or decisions, the steady stream of high-quality rationalisations they provide can entrench people in biased conclusions they would otherwise be forced to abandon.

Another problem with rationalisation markets is that sustained exposure to one-sided justifications can and does lead people to adopt increasingly extreme beliefs.

Research on group polarisation has revealed that groups of like-minded people tend to “go to extremes”. For example, a group of somewhat pro-choice liberals are likely to become much more pro-choice after a period of deliberating with each other. Although one mechanism through which this occurs is reputational, another—more mundane—mechanism is simply that people within like-minded groups are exposed to a highly skewed sample of evidence and arguments.
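The skewed-sample mechanism just described can be illustrated with a minimal simulation. This is a toy sketch of my own, not from the polarisation literature: the additive update rule, the argument strengths, and the starting belief are all arbitrary assumptions.

```python
import random

def deliberate(initial_belief, argument_pool, n_arguments, rng):
    """An agent hears n_arguments drawn at random from the pool and
    nudges their belief by each argument's signed strength."""
    belief = initial_belief
    for _ in range(n_arguments):
        belief += rng.choice(argument_pool)
    return belief

rng = random.Random(42)
balanced = [+0.5, -0.5]            # pro and con arguments equally available
skewed = [+0.5, +0.5, +0.5, -0.5]  # a like-minded group: mostly pro arguments

start = 1.0   # everyone begins mildly pro
n_agents = 1000
after_balanced = sum(deliberate(start, balanced, 20, rng)
                     for _ in range(n_agents)) / n_agents
after_skewed = sum(deliberate(start, skewed, 20, rng)
                   for _ in range(n_agents)) / n_agents
# Balanced exposure leaves the group's average belief near its start;
# skewed exposure pushes the whole group toward a more extreme position.
```

No individual agent here is gullible or irrational: each simply updates on the arguments they happen to encounter. The extremity emerges entirely from the lopsided composition of the argument pool.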

Something similar is true of rationalisation markets: sustained exposure to the rationalising information they produce can push people towards more extreme versions of their favoured beliefs and narratives.

As Hugo Mercier has observed of the polarising effects of biased media,

“Polarization does not stem from people being ready to accept bad justifications for views they already hold but from being exposed to too many good (enough) justifications for these views, leading them to develop stronger or more confident views.”

Finally, rationalisation markets can have insidious effects when the rationalising content traded within them influences people outside the original consumer base.

For example, people who acquire justifications from their favoured pundits and media outlets can then use those justifications in conversations and arguments with others. (Indeed, this utility constitutes the ultimate function of rationalisations). Alternatively, once a thriving media ecosystem has emerged in response to audience demand for rationalising information of a certain kind, it is often easy for people outside the original audience to become exposed to its content.

In general, this is my tentative explanation of why someone like Rupert Murdoch seems to have had a considerable—and insidious—political impact: Despite his image as a master manipulator of public opinion, his talent consists primarily of building media outlets that are highly effective at catering to consumer demand—to audiences’ biases, preferences, and sensibilities. However, once such thriving media outlets are up and running, it is easy for people outside the original audience base to be exposed to—and persuaded by—their skewed content and coverage.

This concludes my two implausibly long posts exploring the epistemic challenges of open societies. Whereas the first painted a bleak picture of individual-level constraints and biases, this post challenges the optimistic idea that unleashing biased individuals within a free and competitive marketplace of ideas will automatically generate truth and enlightenment. Instead, the real-world marketplace of ideas simply responds to—and partly exacerbates—people’s biases.

Of course, so far, I have focused exclusively on the epistemic challenges open societies confront. Although this pessimistic focus seems to capture the quality—or conspicuous lack of quality—of the public sphere in many liberal democracies, it would also be fair to point out that this pessimistic analysis is one-sided.

If bias, distortion, and illusion are so pervasive within open societies, what explains why many people—perhaps not enough, but many nonetheless—seem quite reasonable and well-informed? And why are open societies, for all their faults, generally superior—socially, politically, and indeed epistemically—to closed societies of various kinds?

In a future post, I will return to these questions, focusing on the positive features of open societies. I will also explore how we (those of us who champion the ideals of such societies) might address some of the epistemic challenges I have identified in these two posts.


