
Reality check: can impartial umpires solve the problem of political self-deception?

Pages 16-25 | Received 09 Jun 2020, Accepted 12 Oct 2020, Published online: 10 Dec 2020

ABSTRACT

What can one say to the self-deceived? And – perhaps more importantly – who can say it? The attribution of self-deception depends heavily on the criteria for what is thought to be beyond dispute. For Galeotti, misperception of reality is a product of psychological and emotional pressure resulting in ‘emotionally overloaded wishes’, and her solution thus involves the construction of what an ‘impartial’ and ‘dispassionate’ observer would conclude when presented with the same evidence. Drawing on her examples of foreign policy decision-making, I discuss two objections. First, I ask whether being ‘dispassionate’ is enough to get one off the hook from the sorts of value judgements that must be made in assessing evidence in complex situations. Second, I address the role of disagreement and dissent, and suggest that what is required are not actors with a lack of emotionally overloaded wishes, but actors with different goals and wishes. Thus, while Galeotti emphasizes solutions drawing on ideals of impartiality, we might more productively look for solutions that engage multiple forms of partiality.

Introduction

What can one say to the self-deceived?[1] And – perhaps as importantly – who can say it? Self-deception operates at the level of our inquiries into and beliefs about the world. In its general outline it involves a ‘misperception of reality, under various sources of psychological and emotional pressure, […] driven by the desire to believe what one wishes to be the case’ (Galeotti 2018, 3). The self-deceived are of course not aware that their ‘emotionally loaded wishes’ are biasing their inquiries and assessment of evidence. It thus seems clear enough that simply presenting the self-deceived with counter-evidence is not going to do the trick: they have already cultivated a repertoire of ways of downplaying or discrediting awkward facts. In the political context we find new layers of difficulty, not least that access to privileged information means that outsiders will be poorly placed to offer credible counter-evidence and arguments to those in power. This was noted by Daniel Ellsberg in a comment to Henry Kissinger in 1968. ‘You will feel like a fool for having studied, written, talked about these subjects, criticized and analyzed decisions made by presidents for years without having known of the existence of all this information, which presidents and others had and you didn’t, and which must have influenced their decisions in ways you couldn’t even guess.’ With access to such privileged information ‘it will have become very hard for you to learn from anybody who doesn’t have these clearances. Because you’ll be thinking as you listen to them: “What would this man be telling me if he knew what I know? Would he be giving me the same advice, or would it totally change his predictions and recommendations?” And that mental exercise is so torturous that after a while you give it up and just stop listening … ’ (Ellsberg 2002, 238).
The problem Ellsberg is talking about – the inability to learn from others who do not have access to the same information – is intimately related to the question of what we can say to the self-deceived, for in both cases the key problem is a loss of capacity to learn from others. This is a difficult enough problem in the case of individual self-deception. But in the political context it becomes intertwined with the complexities of group deliberation and decision, institutional cultures, and dynamics of power. And to this we can add the pressures of time, uncertainty, and the magnitude of consequences.

Many philosophers would be tempted to strip out these complexities in order to isolate the mechanisms of interest through simple, stylized examples. It is to Galeotti’s great credit that she does not do this. Rather, she directly tackles the complexity of political self-deception, diving into the histories and biographical recollections of the actors involved in the Bay of Pigs blunder, the Gulf of Tonkin incident, and the phantom weapons of mass destruction that American and British leaders imagined to be in Iraq. To these messy historical cases she applies an original account of self-deception, which aims to steer between the common tendency either to overstate individual agency – to look for the lies and the lying liars who told them – or to understate agency by dissolving it into an anonymous system or process, so that it seems moral responsibility can be entirely evaded. She addresses the material of the historian with the analytic tools and normative commitment of the political philosopher, and aims to identify the appropriate location of moral agency in complex and obscure collective processes. I am sympathetic to this project and to this approach to political epistemology. In this short paper, however, I will focus on her proposed solution. In particular, I will develop two objections to her idea of introducing ‘impartial’ and ‘dispassionate’ observers to check the process of self-deception. First, I ask whether being ‘dispassionate’ is enough to get one off the hook from the sorts of value judgements that must be made in assessing evidence in complex situations. Second, I address the role of disagreement and dissent in collective deliberation and decision, and suggest that what is required are not actors with a lack of emotionally overloaded wishes, but actors with diverse goals and wishes.
From this I develop more positive proposals for engaging multiple forms of partiality, and suggest a surprising role for individual or localized self-deception in checking against collective self-deception. In order to lay the ground for this discussion, I will begin by analysing an aspect of her theory of self-deception that is curiously underplayed in the book: collective self-deception.

Collective self-deception

The philosophical literature on self-deception focuses almost entirely on individuals in neatly stylized situations: the ‘jealous husband’ case, for instance, where the husband’s emotionally loaded wish for a happy marriage leads him to downplay evidence of spousal infidelity, or the case of a person with cancer deceiving themselves about their prognosis. This literature is built on the paradox of individual self-deception: How can an agent deceive themselves? Who is the agent doing the deceiving, and who is the deceived, when they are the same person? As Sissela Bok once put it, ‘I deceive myself’ does not stand in a relationship to ‘I deceive you’ in the way that ‘I blame myself’ stands in relation to ‘I blame you’. ‘Rather, it often stands, like Münchhausen’s claim: “I can lift myself by the hair” stands in relation to “I can lift you by the hair”’ (Bok 1980, 926). Various philosophers have tried to resolve this tension by invoking some sort of internal division, such that one part of the self could be deceiving another. Others have attempted to sidestep the problem by treating self-deception as a product of unconscious or ‘cold’ biases. Galeotti carefully steers between these ‘intentionalist’ and ‘causal’ approaches to self-deception by means of an ‘invisible hand’ explanation, in which self-deception is the product of the temporally extended internal deliberations of an individual agent, enabling a measure of moral responsibility to be attributed to the agent even while recognizing the role of biases within the process of becoming self-deceived. This conceptual discussion anchors the first third of the book, but her central concern is with something slightly different: political self-deception.

Political self-deception is most centrally a form of collective self-deception. While we might think of individual office-holders as being self-deceived with respect to particular questions, the paradigm cases of political self-deception relate to groups. Thus, we (and Galeotti) naturally speak of ‘the Bush administration’ being self-deceived on the question of the presence of WMD in Iraq, of Kennedy’s national security council being self-deceived with respect to the Bay of Pigs blunder, and so on. However, the paradox of self-deception – the problem to which all the philosophical acrobatics in the opening chapters are the solution – seems largely to disappear when we turn to collective self-deception. It might well seem implausible on its face to say that an individual agent both believes P and not-P, but it does not seem nearly so implausible to say the same for groups. This is because we recognize at the outset that groups are composed of individuals with different beliefs, motivations, emotionally loaded wishes, and so on. It is not at all obvious that groups can be self-deceived in quite the way that individuals are, and the shift from the individual agent – the jealous husband – to the collective agent – the Bush administration – is one that takes some explaining.

One approach is to construe collective self-deception as the sum of individual self-deceptions. Those individual self-deceptions might themselves be the product of group dynamics and interactions. That is, the group context might explain why the individual has an emotionally loaded wish that P be true. But it is still the individual who goes through the deficient belief formation process described by self-deception. Galeotti suggests this approach when she initially turns to the group context. The social and political context is ‘much more complex’ because ‘the context, from which SD stems, is complicated and includes wrong assumptions, mixed intentions, ideology and blurred data’ (Galeotti 2018, 48). Here she is framing political institutions and group deliberations as ‘context’, the environment in which we might then try to pick out the individual self-deception. Indeed, she presents the value of her theory in terms of picking out individual self-deception in a noisy social context (Galeotti 2018, 48). She also suggests that individual self-deception can play a crucial role within group dynamics at pivotal moments (such as covering up blunders, or explaining the self-silencing of critics). But this does not add up to collective self-deception until all (or most) are (individually) in that state.

Galeotti seems to favour this summative or methodologically individualist approach to analysing collective self-deception. But she also invokes a stronger version of collective self-deception, in which ‘[t]he group, as a collective subject’ (Galeotti 2018, 102; my emphasis) constitutes the agent that is self-deceived. It is the group here that ‘has the wish that P’ (Galeotti 2018, 102). The sense in which the group wishes that P, she says, is that ‘the political career of the leader and of whole group depends on P’ (Galeotti 2018, 102). Some advisers – eager to please – may begin a biased search for evidence for some plan Q, which fits with the wish that P. Plan Q is well-received by the leader, and this positive reception is taken as independent positive evidence for Q by the proponent and other members of the team: ‘The deceptive conviction that Q is good is thus reinforced by the emerging consent, which is … induced by the tendency to please’ (Galeotti 2018, 102). Recalcitrant group members are cowed into silence by the perceived consensus of the group, or even persuaded to give up their reservations. Thus, she concludes, ‘[t]he different steps to self-deception … are taken by different persons in the group. At the end, each is convinced by the conviction of the others and by their motivation, the whole process remains opaque, and the deceptive belief is collectively shared’ (Galeotti 2018, 103). On this view, no individual needs to go through the entire self-deception process herself (after all, she notes, conformity pressure does a lot of the work); that process can be divided up and distributed among different members of the group. An example here is the alleged uranium deal between Iraq and Niger, in which ‘motivation, negative evidence, and biased search for confirmation jointly led to the false confirmation that WMD were present in Iraq’ (Galeotti 2018, 49; my emphasis).
The three elements of self-deception – the motivation to undertake some policy action, the anxiety aroused by negative evidence, and the search for information confirming the feasibility of the action (Galeotti 2018, 48) – can thus be manifest in different people at different times in a collective process.

Yet this account of collective self-deception is somewhat ambiguous. When she talks of different steps taken by different persons it sounds like she is developing a strong or emergentist account of collective self-deception: no particular individual agent, it seems, goes all the way through the process of self-deception, but the group, in some sense, does. Yet when she talks of the false belief being ‘collectively shared’ she seems to mean simply that it comes to be held by all or most members of the group. The complex interactive dynamics are presented as a means to the self-deception of all or most of the group as individuals. I suspect that her primary concern is with individual self-deception, but, whether or not that is the case, she does not clearly distinguish the summative and the emergentist views of collective self-deception. In her account they are simply run together, so that she can conclude both that different agents played different roles in the production of self-deception at the level of the group, and that such self-deception was manifest in all or most of the individuals in the group themselves being self-deceived.

The conflation of summative and emergent views of collective self-deception is important for at least two reasons. First, it masks one of the interesting features of her account of collective self-deception: that groups can become self-deceived without all the particular individuals within those groups being self-deceived. Indeed, on one reading of her account, no individual need go through each particular step in the process of self-deception. What is necessary is that the group as a whole goes through the process of wanting some policy, encountering negative evidence, and engaging in a biased search for information. Second, and more generally, this conflation underplays and leaves unexplored the ways in which individual self-deception can relate to collective self-deception. She presents them as interacting in complex ways but ending up with all or most of the individuals in the group being self-deceived, such that individual and collective self-deception push in the same direction. Yet it is possible – or so I will suggest below – that individual self-deception can play a role, in some circumstances, in resisting collective self-deception.

Resisting collective self-deception through impartial umpires

Self-deception is perhaps most commonly invoked to explain the failure of others to see what seems obvious to us. The attribution of self-deception, as Sissela Bok observes, depends heavily on ‘the criteria for what is thought indisputable’ (Bok 1980, 926). Jean Bodin, she notes, thought the reality of witchcraft was indisputable, and that those who didn’t see it were deceiving themselves. Petrarch’s Augustine thought the same for those who persisted in pursuing earthly passions rather than contemplating the sinfulness of their soul. Galeotti’s account of self-deception locates the criteria for what is thought indisputable in the counterfactual construction of what an ‘impartial’ and ‘dispassionate’ observer would conclude when presented with the same body of empirical evidence. She thus opens her book by describing self-deception as a ‘misperception of reality, under various sources of psychological and emotional pressure, […] driven by the desire to believe what one wishes to be the case, even if a dispassionate review of the available data would lead any impartial observer to the opposite conclusion’ (Galeotti 2018, 3; my italics).

Such a dispassionate review by an impartial observer not only tells us when a person or group is self-deceived. It also – Galeotti suggests – provides the means to get them to snap out of it. She thus proposes to counter self-deception through the institution of a personally and cognitively independent umpire or referee. Such an observer – one who is, crucially, detached from the emotional pressures of the group – would be able to provide a reality check. She suggests that groups should pre-commit to being checked in real-time by ‘dispassionate’ observers, and also that retrospective accountability to impartial umpires would serve to discipline their reasoning and check against the risk of self-deception. Like Ulysses and the Sirens, wise agents would recognize that their emotionally loaded wishes might ‘manipulate[] attention and relax[] usual standards of reasoning; [making] the subject […] block and discard certain data, and to focus and stress others, so as to arrive at an explanation which reconciles -E and P, despite being clearly faulty and below the usual standards of the same subject in different cases’ (Galeotti 2018, 48). The implication is that such an observer, detached from the emotional pressures of the group, would clearly see reality, and be able to persuasively communicate it to the agents in question.

This proposal depends on a ‘dispassionate review of the available data’. However, ‘the available data’ are often not straightforwardly available. As Galeotti recognizes elsewhere, self-deception is part of a process of inquiry extended over time. This means that we cannot simply assume settled knowledge as the Archimedean point from which to mount our critique. It is easy to condemn Colin Powell from our vantage point now, when the various errors and deceptions that led him to believe that Iraq possessed weapons of mass destruction have been made apparent (after a long and messy struggle). We are tempted to ask: Why could he not see what now seems so obvious? But if we are to imagine ourselves within a process of inquiry, we must ask a slightly different question: At what precise point does it become irrational to cling to a particular theory in the face of mounting negative evidence? And this question, as Thomas Kuhn (1996) notoriously suggested, has no clear answer.

Turning to the idea of the unemotional or ‘dispassionate’ observer, the assumption seems to be that without emotions clouding our individual and collective processes of inquiry, we would unproblematically converge on one right answer, or at least, one right set of possible answers. Yet emotions are not the only way in which values enter into epistemic judgements. To put it another way, simply being a dispassionate seeker after truth does not get one off the hook from the sorts of value judgements that must be made in assessing evidence in complex situations. Even a scientist, when a hypothesis is not verified with absolute certainty, must accept or reject it on the grounds that the evidence is sufficiently strong, and ‘sufficiently strong’ is itself ‘a function of the importance, in a typically ethical sense, of making a mistake in accepting or rejecting the hypothesis’ (Rudner 1953, 2).[2] This problem is of particular relevance in the kinds of decision-making situations that are the focus of this book, where a president is choosing how far to rely on the judgement of her advisors. Whether we want to take this ‘inductive risk’ depends on what we are prepared to stake on the chance of the claim being right or wrong. And this will be different for different people.

Now, Galeotti clearly recognizes this point, for the pivotal mechanism within her model of self-deception is that an emotionally loaded wish alters the relative ‘costs of inaccuracy’ (Galeotti 2018, 49), which in turn influences the selection and appraisal of evidence. But she focuses on the role of emotion in distorting these costs, whereas the arguments from inductive risk suggest social and political values may be doing significant work. Regardless of whether it is dispassionate or not, the assessment of evidence will involve judgements that are informed by values. If this holds for scientists, it is even more obviously relevant when we consider the sorts of experts in advisory roles considered by Galeotti (Steele 2012). If we recognize that selection and assessment of evidence is (at least in this way[3]) value-laden, then it becomes less clear that we can simply rely on a mechanism of ‘dispassionate’, emotionally unconnected, and personally independent observers, to do the job of overseeing decision-makers. One can well imagine a group of climate scientists being selected for such a job, whose careers are independent of pleasing political masters, and who are not emotionally involved in a particular collective decision-making group, but who would nonetheless be open to accusations of having values that are at play in the assessment of evidence, in virtue of their disciplinary formation, methodological commitments, personal backgrounds, and so on.

Resisting collective self-deception by multiplying partiality

Yet there is a deeper problem with the idea of impartial umpires providing a reality check within processes of group reasoning, and it relates to the role of disagreement and dissent. Galeotti emphasizes – as does almost every other commentator on these episodes, from Irving Janis to Sir John Chilcot – the need for ‘honest disagreement and wide, open and frank discussion, challenging received intelligence’ (Galeotti 2018, 243). As a means to avoid self-deception, she thus advocates pre-commitment in the form of ‘an authorized referee in political cases’ (Galeotti 2018, 110), who would have both personal and cognitive independence (Galeotti 2018, 111). That is to say, the personal prospects of the referee or umpire should not depend upon the leader or decision-makers in question, and they should be so detached from the motivational dynamics of the team that they are ‘in the position to review the plan with unprejudiced eyes and to assess the truth-value of data interpretation with a diagnostic attitude’ (Galeotti 2018, 111). These desiderata could, she suggests, be met by appointed ‘devil’s advocates’ within the team, or perhaps by an ‘independent body of overseers’ (Galeotti 2018, 111). While she clearly acknowledges that devil’s advocates invoke ‘the spirit of the adversary system in courts’ (111), she is not particularly interested in the details, and does not make much of the very different logics at work in these two proposals: one involves the idea of impartial observers overseeing political judgement; the other involves the production of countervailing forms of partiality. Galeotti gives particular emphasis to the logic of impartiality, but the logic of countervailing forms of partiality may be a more promising way to check against collective self-deception.

That is, rather than seek to introduce actors with a lack of emotionally overloaded wishes, we might focus on ways of including in collective deliberative processes a range of actors with different goals and wishes. These are the people who are likely to appraise and assess evidence in ways that counter and check the self-deception of the decision-makers. The advantage of pressing this line of thought is that it does not rely on the removal of motivated reasoning. Rather, it sets motivated reasoning against itself. This political-epistemic insight goes back to Mill, but it has been recently revived in arguments for collective wisdom and the value of epistemic diversity (Landemore 2013). The point is not that diversity as such has epistemic value, but rather that adversarially structured engagement among diverse reasoners serves to improve the evaluation of arguments. Mercier and Sperber observe that there is an asymmetry between the production of arguments and their evaluation. When we produce arguments, they suggest, we seek out evidence supporting our initial positions with the aim of persuading others. When we evaluate arguments, however, we aim to sort good arguments from bad, and genuine information from misinformation. Their point is that the process of reasoning is not contained wholly within a single mind, but is distributed across a group (Mercier and Sperber 2011). That is, in order for a group to remain sensitive to new evidence, it is necessary that the group retains a diversity of perspectives and goals in light of which different evidence will be sought, and the same evidence will be differently tested, weighed, and evaluated.

Pushing this insight to its limit, it is possible that individual or local-level self-deception can be productive at the level of collective reasoning, provided it is the product of different emotionally loaded wishes than those operative in the rest of the group. This, I would suggest, is the force of Mill’s oft-quoted argument for the protection of minority opinion. It is of course commonly recognized that, as Gerry Mackie puts it, ‘the minority improves public judgment by activating a validation process in the otherwise conformist majority. Respecting the minority discourages deluded consensus … ’ (Mackie 2006, 298). What is less obvious is that the continued presence of such minorities clearly implies that there is epistemic value – at the collective level – in some individuals or groups holding on to their positions even when it might be rational to give them up. This, after all, is the logic of the devil’s advocate position itself: it involves defending a position you do not really hold, probably because you regard the position as not tenable on the balance of the evidence. To maintain a minority position in the face of the evidence is hard, and might be underpinned by ideological commitments or (perhaps) an emotional wish-based motivation. Indeed, the characteristic resistance to new evidence exhibited by the self-deceived could also, presumably, serve as resistance against conformity to the position of the group. To be clear, I do not think that such diversity is only of value if it involves self-deception or even that it is especially likely to lead to self-deception.
My claim is that to the extent that individuals or sub-groups within the wider group do fall into self-deception – being motivated in favour of a particular position, encountering negative evidence, and engaging in a biased search for information to confirm their position – the fact that they do this in service of different positions, that their biased search privileges different lines of inquiry, includes different sources of information, and discounts or minimizes different objections, makes it harder for the wider group to be unaware of the ways in which their own commitments are shaping their judgements.[4]

Thus, while Galeotti frames individual and collective self-deception as being involved in a mutually supportive dynamic, she misses the way in which her own argument allows for their separation, and further, for the possibility that individual self-deception can – in the right circumstances[5] – be a check against collective self-deception. This is only a possibility. It is of course possible that the very same process of self-deception that leads a group to discount and marginalize counter-evidence could also lead to the discounting and marginalization of minority positions within the group. However, if we grant that political actors will always be tempted to marginalize uncomfortable evidence and not invest in lines of inquiry that seem likely to produce such evidence, or seem unnecessary because they already have sufficient evidence for their desired conclusion, then we have to ask what sorts of institutional structures will make such behaviour more difficult. My suggestion here is that when group deliberations are organized without attention to diversity and structured argumentative engagement – for instance, when deliberations are organized through consensus procedures – it is much easier to marginalize dissenting voices. Rather than treat collective self-deception as a process in which individual and collective self-deception reinforce one another, we can see ways in which – in theory at least – they might check against one another, and be made productive for the quality of group deliberation. This opens up a slightly different set of questions we might ask about collective self-deception: When, how and under what conditions might individual (or sub-group) self-deception reinforce collective self-deception, and when might it provide checks against it?

Conclusion

The approach I advocate here – of generating more diversity, and in particular adversarially organized diversity, within decision-making groups – is hardly original. Janis’ (1982 [1972]) theory of ‘groupthink’ has passed into the wider cultural lexicon, and in recent democratic theory there has been renewed attention to the epistemic pitfalls of ‘cloistered experts’ (Ober 2008, 1) and the risks associated with expert consensus (Moore 2017). Recent studies of the political epistemology of expertise have highlighted the value of adversarially organized expertise (Turner 2003; Pamuk 2020). Yet in practice it is hard to follow through on these insights. It is not a coincidence that Galeotti’s examples of political self-deception are all located in the realm of international politics. These groups are typically making decisions under great uncertainty, extreme time pressures, and with profound human consequences. They are also, as Ellsberg’s comment from the beginning of this essay emphasizes, drawing on highly restricted information that limits the scope for a diversity of similarly informed judgements. In short, the requirements of action on the international stage in urgent moments of crisis are precisely those that open up the risk of self-deception and which preclude the obvious solutions, such as the introduction of impartial observers, or of diverse competing groups with a capacity to conduct inquiry. The circumstances that make self-deception more likely are precisely those that make counter-measures particularly impractical. Yet Galeotti is surely right, in the broadest sense, to draw our attention to the question of how we navigate the tension between performing our roles within systems and monitoring the performance of the system itself, and to think creatively about the ways we might reform those systems.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1 I would like to thank all the participants in the workshop on Galeotti’s Political Self-Deception hosted by the York Centre for Political Theory at the University of York on 5 April 2019, and in particular Alasia Nuti and Gabriele Badano for organizing the workshop, and Anna-Elisabetta Galeotti for her generous and thoughtful responses.

2 See Winsberg (2012) for an excellent recent discussion of this argument in the context of climate modelling.

3 There are of course many other ways in which values can influence knowledge production (see Douglas 2009).

4 It is also important to note that I am in no way endorsing fraud, deception, manipulation, suppression of evidence and so on. Rather, I follow Galeotti’s own assumption that self-deception is not to be reduced to lying; her central claim is that it can unwittingly emerge in the reasoning of well-intentioned inquirers. My point is that rather than simply exhorting leaders or their advisors to be more careful, or supposing that there is an Archimedean point of unbiased assessment of evidence from which they could be checked, we should instead look to diversity and adversarial engagement to bring implicit commitments to light.

5 Those circumstances include, importantly, some willingness on the part of group leaders to tolerate and even encourage diversity in the deliberating group.

References

  • Bok, S. 1980. “The Self-Deceived.” Social Science Information 19 (6): 16–25. doi:10.1177/053901848001900602.
  • Douglas, H. 2009. Science, Policy, and the Value-free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
  • Ellsberg, D. 2002. Secrets: A Memoir of Vietnam and the Pentagon Papers. New York: Penguin Books.
  • Galeotti, A. E. 2018. Political Self-Deception. Cambridge: Cambridge University Press.
  • Janis, I. 1982 [1972]. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston: Houghton Mifflin.
  • Kuhn, T. 1996. The Structure of Scientific Revolutions. 3rd ed. Chicago: University of Chicago Press.
  • Landemore, H. 2013. Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many. Princeton and Oxford: Princeton University Press. doi:10.23943/princeton/9780691155654.001.0001.
  • Mackie, G. 2006. “Does Deliberation Change Minds?” Politics, Philosophy & Economics 5 (3): 279–303. doi:10.1177/1470594X06068301.
  • Mercier, H., and D. Sperber. 2011. “Why Do Humans Reason? Arguments for an Argumentative Theory.” Behavioral and Brain Sciences 34: 57–111. doi:10.1017/S0140525X10000968.
  • Moore, A. 2017. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press.
  • Ober, J. 2008. Democracy and Knowledge: Innovation and Learning in Classical Athens. Princeton, New Jersey: Princeton University Press.
  • Pamuk, Z. 2020. “‘The People Vs the Experts: A Productive Struggle’, in Moore, A., Invernizzi-Accetti, C., Markovits, E. et al. Beyond Populism and Technocracy: The Challenges and Limits of Democratic Epistemology.” Contemporary Political Theory. https://doi.org/10.1057/s41296-020-00398-1.
  • Rudner, R. 1953. “The Scientist Qua Scientist Makes Value Judgments.” Philosophy of Science 20 (1): 1–6. doi:10.1086/287231.
  • Steele, K. 2012. “The Scientist Qua Policy Advisor Makes Value Judgments.” Philosophy of Science 79 (5): 893–904. doi:10.1086/667842.
  • Turner, S. P. 2003. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: SAGE Publications.
  • Winsberg, E. 2012. “Value Uncertainties in the Predictions of Global Climate Models.” Kennedy Institute of Ethics Journal 22 (2): 111–137. doi:10.1353/ken.2012.0008.