Review Article

Believing badly ain’t so bad

Pages 1208-1216 | Received 26 Mar 2022, Accepted 09 May 2022, Published online: 29 May 2022

The Covid-19 pandemic provides the newest example of staunch polarization in the epistemic community, offering ample opportunity for profound disagreements on its origin and on the international response in its wake. Did it originate in a Wuhan market, or was it a deliberate and malign output of the Chinese government, or a side effect of 5G towers? Do the vaccines many of us have taken offer protection against serious disease and death, or are they a government ploy to cull the population or track our movements? Are facemasks at worst a small inconvenience worth bearing to protect others, or an infringement of our civil liberties? The opportunity for polarization offered by the pandemic is not especially new, interesting, or surprising, following on as it does from similarly serious disagreement on, for example, climate change and the shape of the Earth.

Some of the options canvassed above are examples of bad beliefs, and it is with examples like these that Neil Levy begins his excellent new book. He offers a working definition of the bad beliefs that will be his focus: they are those which are unjustified and which conflict with the beliefs held by the relevant epistemic authorities. They are also either maintained in the face of widespread public availability of evidence in favor of more accurate beliefs, or in the face of knowledge that the relevant epistemic authorities have these more accurate beliefs. Levy’s task in the book is to explain why people have bad beliefs so defined.

Now it might be thought that explanations already abound, and perhaps this will be one more journey into the land of human irrationality, with bad beliefs being captured as the outcomes of various cognitive biases and deficits which make us helplessly prone to engaging in shoddy epistemic practice. For a flavor, consider what various folk have said about belief in conspiracy theories (a label which captures many of Levy’s bad beliefs). It is commonplace in this area to observe that conspiracy beliefs are associated with a number of cognitive biases. Examples include the “intentionality bias”, which leads us to favor explanations in which intentional agents play a key role and any relevant fact or event is seen as the result of deliberate intentional actions, rather than of mere coincidence or mechanical causes (Brotherton & French, 2015; Douglas et al., 2016). Or there’s the “proportionality bias”, according to which “when big things happen, we look for big causes” (Brotherton, 2015, p. 211) – where the causes that we tend to see as “big” are typically those involving the actions of intentional and powerful agents (Ebel-Lam et al., 2010; Leman & Cinnirella, 2007). At a more general level, conspiracy beliefs have been described as driven by the basic “causality bias” that leads us to posit meaningful causal connections between co-occurring and spuriously correlated facts and events (van der Wal et al., 2018). These biases are ones which are typically understood as irrational, or at least, as otherwise useful heuristics which misfire in some contexts.

Levy’s book very much marks a departure from this kind of approach. Instead, he advocates understanding bad beliefs as the products of entirely rational processes, formed as a result of appropriately responding to evidence. Now of course, something is going wrong: these beliefs are, after all, bad ones. What is going wrong, argues Levy, is that the epistemic environment abounds with misleading evidence. Although the reasoning of bad believers is perfectly rational, what makes its outputs bad are the inputs with which it works. If we want to tackle bad beliefs we ought to move away from our much loved epistemic individualism, and focus instead on improving the epistemic environment. One way of doing so is to embrace nudges, a much maligned epistemic strategy thought to impede our autonomy and bypass rational decision-making processes. Part of Levy’s book is dedicated to arguing that they do no such thing. To respond to nudges is to respond to implicit testimony, and in so responding we give appropriate weight to higher-order evidence. In what remains I will briefly overview the book’s chapters, making some remarks along the way. To show my hand, I find the book’s main thesis pretty persuasive: I think Levy is probably right, in spirit if not necessarily in all of the details.

Chapter One (What Should We Believe About Belief?) begins by setting out why Levy cares about belief: because of the role it plays in explaining and causing behavior. On the “relaxed standard” he adopts, a state counts as a belief “so long as it drives a sufficient amount of our sufficiently consequential behavior” (p. 2). Those already penning their counterexamples should note that the nature of belief is not where Levy’s interests lie, and he doesn’t take himself to have pinned down belief’s essential nature with this quick characterization (although I will say something later about why more might be needed here). Of course, a quick characterization which doesn’t pretend to have solved the what is belief? question had nevertheless better be able to withstand the most obvious putative counterexamples (inert beliefs). Later then, Levy turns to putative cases of inert belief, drawing on a range of examples from Mercier (2020), and the particular case of religious belief from Van Leeuwen (2014, 2018, forthcoming). He argues that such cases are not ones of causal inertness after all, and, anyway, we have not been given a reason to think that our paradigm bad beliefs (i.e., concerning climate change or vaccination) are not genuine beliefs (that is, states which drive a sufficient amount of our sufficiently consequential behavior).

Levy turns to expressive responding, that is, the phenomenon of folk reporting beliefs not because they really hold them, but rather in order to express support for a particular side. Levy considers this idea as a way of accounting for cases of mismatch between belief reports and behavior, and argues that, at the very least, expressive responding will not be able to explain some of the most consequential cases. Finally, he considers two kinds of deficit accounts which might explain bad belief: information deficits (bad believers haven’t been exposed to better information) and rationality deficits (bad believers process information badly). Although these accounts may explain some bad beliefs, Levy argues they fall short of a comprehensive explanation.

Chapter Two (Culturing Belief) opens with the idea that humans are rational animals, and suggests that it is no less accurate to understand us as cultural animals. Culture – understood as vertical (elders) or horizontal (peers) transmission of information from others – isn’t unique to humans, but cumulative culture (accumulation of knowledge across multiple generations) is. Levy argues that the centrality of cultural evolution to human flourishing can explain some otherwise puzzling facts about us, including our long periods of dependency on caregiving, and the fact that we are overimitators, disposed to copy even those behaviors not directed at goal pursuit. After a brief overview of the approaches to cultural evolution touted by the Californian School (Richerson, Henrich, Boyd) and the Paris School (led by Sperber), Levy identifies some agreement between them: the mechanisms to which they appeal should be seen as intelligent.

We move then to an idea that the reader may well be holding onto, even if she buys the case so far that “For much of what we know about the world, we are deeply dependent on others” (p. 50): science is a different animal. Surely this is a domain in which we should take no one’s word for it (Nullius in verba is the Royal Society’s motto). The epistemic independence of science, the idea that “science doesn’t care what you believe” printed across many a t-shirt and captioned in many a meme, strikes me as central to why it is so revered. It sits atop a podium of superior knowledge-generating methods, its epistemic goods thoroughly untainted by any contamination brought about by deference. For me, one of the best parts of Levy’s book is his dismantling of this view of the nature of science and its progress, which he takes to be profoundly mistaken. He is incredibly persuasive that scientists are equally dependent on testimony: they “use tools they didn’t develop (and that they may not be able fully to understand) often applied to data they didn’t gather and which they can’t verify, to test hypotheses that are constrained by theories they may not grasp” (p. 54). But this is not to throw shade on the business of science, or to knock its practitioners down an epistemic peg or two. Rather, “these constraints enable them to do science” (p. 54), and his worked-through example from climate science demonstrating this is convincing.

Chapter Three (How Our Minds Are Made Up) turns to distributed cognition and how we rely on social referencing and deference to form and update beliefs. Levy begins with a pessimistic view of the epistemic powers of individuals: “Alone we understand nothing” (p. 59). He argues that both scientists and ordinary folk outsource belief production and beliefs themselves, and that in general we rely on others to maintain our beliefs across a range of domains. Beliefs, thus, are shallow. Many are both optional (that is, not identity-constituting) and easily abandoned (and sometimes without our noticing). Levy has it that belief’s shallowness falls under the more general phenomenon of outsourcing cognition to the world, and that our internal representations are sparser than is often thought. He draws on experimental evidence from change blindness, cognitive dissonance, and choice blindness to demonstrate the shallowness of beliefs, how we mistake our internal representations as rich, and how we rely on the world to tell us what we believe, and to ensure the stability of those beliefs.

Let me say a little more about Levy’s discussion of choice blindness experiments, in which participants often give reasons for a choice they did not make (but are manipulated to believe that they did make). At least, that is the standard interpretation, and the one adopted by Levy. Such an interpretation clearly fits with his case for representational sparseness and our tendency to outsource to the world. But an alternative interpretation of the data is possible: the participant is right about what her choice is, and gives reasons for her choice, but simply does not realize that (due to experimental manipulation) her choice has changed (see Lopes, 2018 for this suggestion of an alternative interpretation and Bortolotti & Sullivan-Bissett, 2021 for reasons to prefer it). If that is what’s going on, choice blindness experiments cannot be so easily harnessed in a case for the sparseness of belief, and for how we are easily mistaken in self-ascription. Rather, these experiments would reveal that our choices are unstable, and that we are unaware of what causes us to change our minds. Our beliefs and choices may be perfectly rich, albeit unstable.

Levy turns to how social referencing may explain belief revision in the Never Trumpers who ended up becoming supporters after all, and the pervasiveness of outsourced belief (how many adults claim to believe the theory of evolution but have very little understanding of that theory’s commitments?). Overall, “We are all very ignorant, and we should be fine with that” (p. 80). Our ignorance with respect to how much we rely on outsourcing cognition is regrettable, leading us as it does to mistake our outsourced knowledge for knowledge individually possessed, and to take distributed cognition to be inferior.

Finally, Levy returns to religious cognition and asks what can explain behavioral inconsistency (if not Van Leeuwen’s preferred answer, canvassed earlier, that the underlying attitudes are of a different nature). Levy argues that religious representations may be outsourced to other people and features of the environment: “If I can be confident that my representations will reliably be cued when they’re needed, I don’t need to be vigilant for situations to which they’re relevant” (p. 85). In this respect then, perhaps religious attitudes look different from some other beliefs (those less contingent on context for their motivational power). Now of course, given Levy’s opening characterization of belief, all of these things get to count: they all motivate a significant amount of behavior a significant amount of the time. But I wonder whether his overall programme allows us to distinguish those beliefs (of which religious beliefs are an example) where behavioral inconsistencies might be explained by outsourcing, from more garden-variety beliefs that motivate cross-contextually.

In Chapter Four (Dare to Think?), Levy sees how far we might get when we turn to “what might reasonably be taken to be individual reasoning given its best shot” (p. 88), namely, regulative epistemology, specifically, virtue epistemology. He argues that any role it might play in guiding us toward better belief is a limited one: if cultivating intellectual virtues helps us to regulate our epistemic behavior toward knowledge, it only does so in environments which are appropriately epistemically structured. We should not see virtue epistemology as an alternative to apt deference and socially distributed cognition, but rather, at best, as playing a role in making us better at such pursuits.

Levy focuses on Quassim Cassam’s work, and in particular, on the virtue of open-mindedness (set against intellectual flaccidity and dogmatism). What follows, much like we saw in Chapter Two, are further demonstrations of the epistemic paucity of individual cognition. The idea that we are obligated to tackle or rebut sophisticated climate skeptics like Rex Fleming is masterfully undermined by Levy when he spends some time taking seriously what this would require of an individual (way too much!) (pp. 97–98). In underlining his point, he turns to implicit bias (an area in which he is an expert), and the idea that implicit biases are malleable and responsive to the efforts of motivated individuals to change them (Cassam, 2018, p. 173). This is a claim that Levy says he’s just not sure about (me neither!), and he notes that “If I haven’t been able to answer the implicit bias question myself, I despair at my capacity to rebut sophisticated climate skeptics” (p. 99). The chapter finishes with the case of the Covid-19 pandemic, and whether this constitutes a counterexample to Levy’s proposed epistemology of apt deference. He argues that it does not.

Chapter Five (Epistemic Pollution) pairs nicely with the previous chapter, insofar as it further problematizes the idea of virtue epistemology as the antidote to bad belief, by reference to our polluted epistemic environment. Focusing on the novice-expert problem (that of identifying genuine or reliable experts), Levy argues that our environment is such that the markers we might use to come to these judgments (credentials, track record, etc.) will not help us do so, since one of the constituents of epistemic pollution is the mimicry of such markers. Add to this, for example, predatory publishers and journals, as well as internal problems of science (replication crisis, publication bias, file drawer effect), and the game of distinguishing reliable from unreliable sources is simply “too difficult for ordinary people to reasonably be expected to accomplish” (p. 117). In a lovely penultimate section of this chapter Levy addresses what some readers may have been thinking all along: sure most people will have a hard time identifying experts, but not me, for I am rather clever. Levy gives us dear readers all of this and more – we are probably well-educated, more intelligent than average, blessed with research skills lacked by the general population, and overall, more protected from epistemic pollution. But before the reader satisfies herself that this is a book about other people, Levy nicely argues that she is, in fact, no counterexample to his approach. Rather, her (albeit not total) invulnerability to epistemic pollution is owed to the fact that she defers. She is successful because she is embedded in particular epistemic networks. I thought that this was a neat move, if a little convenient, and it made me wonder what it would take for there to be a counterexample to Levy’s model. If the virtue epistemologists produce a case of a person cultivating her intellectual virtues against the background of shoddy epistemic networks, if she doesn’t defer but rather does the work herself, to what do we owe her success? Will Levy say that she is better characterized as falling within what his model predicts after all? What exactly do the virtue epistemologists need to produce by way of an epistemic success which couldn’t be reformulated as a win for Levy?

In the final chapter (Nudging Well), Levy argues that nudges work by providing higher-order evidence to agents, to which we respond perfectly rationally. He thus breaks away from both proponents and opponents of nudges who agree with one another that nudges threaten our autonomy and work by bypassing our capacities for rational agency. A key example for his discussion is the ballot order effect: candidates listed higher on a ballot sometimes accrue a small but significant advantage. One take is that a candidate’s position on a ballot does not provide a genuine reason to favor her, and so, if we are influenced to favor her, we have our choices shaped by facts that do not constitute good reasons. Levy argues instead that nudges do not bypass rational cognition; rather, they function as higher-order evidence in the manner of implicit recommendations. With respect to the ballot order effect, Levy argues that although the order doesn’t give us reliable evidence, it does give us evidence nonetheless. Often the ordering of items implies their relative importance (e.g., on news programmes). And so a candidate being listed first functions as implicit testimony that she is better than the other candidates.

Similar things are said about how defaults function (e.g., being automatically enrolled into a pension scheme). Although defaults are often seen as taking advantage of our cognitive laziness, Levy argues that they too function as providing implicit testimony, in particular, for the claim that an option is choiceworthy. One case he discusses is a study of pulmonologists asked whether or not they would order a CT scan for a patient (Aberegg et al., 2005). 54% ordered a scan in the control condition. But in another condition, where participants were told that a scan had already been ordered, although not performed, only 29% of them had the scan canceled. In discussion of this study, Ansher and colleagues (2014) argue that a non-rational bias drives this effect, since it is clinical information, and not whether or not a scan has already been ordered, which should influence the pulmonologists’ decision. Of this case, Levy argues that the attitudes of the pulmonologists’ epistemic peers should of course be given weight in their decisions: these attitudes provide higher-order evidence. Nudges of various kinds, then, in making certain options salient, are ways of giving agents implicit testimony, by which it is rational to be guided.

In the clinical case, I think Levy is absolutely right: that a scan had been ordered by an epistemic peer is, in fact, highly relevant to one’s decision, since it functions as higher-order evidence regarding the status of the first-order evidence (clinical need). The higher-order evidence here functions in much the same way as in the “by now hackneyed” restaurant bill case with which Levy opens his discussion of higher-order evidence (p. 136). Here I become confident that each diner owes a given amount, and my friend becomes confident that each diner owes a different amount; her differing verdict gives me reason to doubt my own. Higher-order evidence from epistemic peers just is evidence regarding the first-order evidence, which is of course relevant to one’s deliberation. Giving it its due in one’s deliberation just is to respond to the evidence rationally.

However, it is less clear to me that cases like these, involving peer disagreement over a math problem or peer recommendations for a particular course of action, are analogous to, for example, the ballot order effect. In the ballot order effect case, we’re applying an otherwise rational strategy in a context where it has no business being applied. The actual evidential value of a ballot position is very low, and is merely mimicking a case where it would be high (i.e., the order of items on a news programme). The legitimacy of the higher-order evidence in the restaurant and the clinic presumably comes in part from the status of the testifiers as epistemic peers, a status we are aware of and which makes a difference to the weight we give to the higher-order evidence. Indeed, I would say this is part of the story of why giving such weight is rational. But in cases like ballot order, where, according to Levy, we understand the order of candidates implicitly as encoding testimony, on what grounds do we take the recommendation seriously? This may well be a difference in degree rather than kind, but the cases harnessed to demonstrate our rational dependency on higher-order evidence do look relevantly different from at least some cases of nudging.

In a short concluding chapter Levy notes that apparent failures to rely on individual cognition and first-order evidence are not the deviations from rationality that we often think. Rather, “They indicate a rational outsourcing of our cognition, a reliance on the division of epistemic labor, and the appropriate use of higher-order evidence” (p. 150).

All told, this is an excellent book. I find Levy persuasive on the main claims, and conceptualizing human epistemic life as one involving significant amounts of rational deference and appropriately responding to higher-order evidence is a welcome new approach to the nature of bad believing. Thinking of bad believers as rationally responding to a polluted epistemic environment, rather than as unfortunate victims of individual irrationality, calls for a cleaning up of the epistemic world, rather than the othering or pathologizing of bad believers favored by epistemic individualists. I’ll resist the temptation to end this review with a quick note concerning for whom the book is “essential reading”, since it seems to me that that list might be long (who wouldn’t be interested in a fresh take on the nature of our epistemic lives?). Instead I’ll just say that I thought it was superb and I enthusiastically recommend it.

Acknowledgments

With thanks to the Arts and Humanities Research Council for funding the research of which this piece is a part (Deluded by Experience, grant no. AH/T013486/1), and the British Academy for funding the project of which this piece is a natural part (Conspiratorial Ideation and Pathological Belief, grant no. SRG21\210992). Many thanks also to Anna Ichino for comments on an earlier version of this piece.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Arts and Humanities Research Council [AH/T013486/1]; British Academy [SRG21\210992].

References

  • Aberegg, S. K., Haponik, E. F., & Terry, P. B. (2005). Omission bias and decision making in pulmonary and critical care medicine. Chest, 128(3), 1497–1505. https://doi.org/10.1378/chest.128.3.1497
  • Ansher, C., Ariely, D., Nagler, A., Rudd, M., Schwartz, J., & Shah, A. (2014). Better medicine by default. Medical Decision Making: An International Journal of the Society for Medical Decision Making, 34(2), 147–158. https://doi.org/10.1177/0272989X13507339
  • Bortolotti, L., & Sullivan-Bissett, E. (2021). Is choice blindness a case of self-ignorance? Synthese, 198(6), 5437–5454. https://doi.org/10.1007/s11229-019-02414-3
  • Brotherton, R. (2015). Suspicious minds: Why we believe conspiracy theories. Bloomsbury.
  • Brotherton, R., & French, C. C. (2015). Intention seekers: Conspiracist ideation and biased attributions of intentionality. PLoS ONE, 10(5), 1–14. https://doi.org/10.1371/journal.pone.0124125
  • Cassam, Q. (2018). Vices of the mind: From the intellectual to the political. Oxford University Press.
  • Douglas, K. M., Sutton, R. M., Callan, M. J., Dawtry, R. J., & Harvey, A. J. (2016). Someone is pulling the strings: Hypersensitive agency detection and belief in conspiracy theories. Thinking & Reasoning, 22(1), 57–77. https://doi.org/10.1080/13546783.2015.1051586
  • Ebel-Lam, A., Fabrigar, R., MacDonald, T., & Jones, S. (2010). Balancing causes and consequences: The magnitude-matching principle in explanations for complex social events. Basic and Applied Social Psychology, 32(4), 348–359. https://doi.org/10.1080/01973533.2010.519245
  • Leman, P., & Cinnirella, M. (2007). A major event has a major cause: Evidence for the role of heuristics in reasoning about conspiracy theories. Social Psychology Review, 9(2), 18–28.
  • Lopes, D. (2018). Feckless reason. In G. Currie, M. Kieran, & A. Meskin (Eds.), Aesthetics and the sciences of mind (pp. 21–36). Oxford University Press.
  • Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe. Princeton University Press.
  • van der Wal, R. C., Sutton, R. M., Lange, J., & Braga, J. P. N. (2018). Suspicious binds: Conspiracy thinking and tenuous perceptions of causal connections between co-occurring and spuriously correlated events. European Journal of Social Psychology, 48(7), 970–989. https://doi.org/10.1002/ejsp.2507
  • Van Leeuwen, N. (2014). Religious credence is not factual belief. Cognition, 133(3), 698–715. https://doi.org/10.1016/j.cognition.2014.08.015
  • Van Leeuwen, N. (2018). The factual belief fallacy. Contemporary Pragmatism, 15(3), 319–343. https://doi.org/10.1163/18758185-01503004
  • Van Leeuwen, N. (forthcoming). Imagination, belief, and religious credence. Harvard University Press.