Research Article

Judging Expert Trustworthiness: The Difference Between Believing and Following the Science


ABSTRACT

Expert-informed public policy often depends on a degree of public trust in the relevant expert authorities. But if lay citizens are not themselves authorities on the relevant area of expertise, how can they make good judgements about the trustworthiness of those who claim such authority? I argue that the answer to this question depends on the kind of trust under consideration. Specifically, I maintain that a distinction between epistemic trust and recommendation trust has consequences for novices judging the trustworthiness of experts. I argue for this by identifying the unique difficulties that emerge when a novice is asked not just to believe expert testimony, but to follow expert recommendations. I outline criteria for novice judgements of expert trustworthiness that have been proposed by Elizabeth Anderson and show that novel problems emerge for her criteria when we shift focus from epistemic trust to recommendation trust. More is needed when we are asked not just to believe the experts but to act as they recommend, because novices looking for trustworthy expert recommendations need to establish whether the recommended course of action supports what is important to them and accords with their values.

1. Introduction

The public health measures that have been used to deal with Covid-19 require public trust for effective implementation. The question I address in this paper concerns the scientific experts who have been included in the decision-making process behind these measures. We have all been asked to put our trust in experts. But if public trust is only valuable when it is trust in the trustworthy (O’Neill Citation2018), then a successful response to the pandemic requires public trust based on sound judgements about expert trustworthiness.Footnote1 This generates a problem: if novices are ex hypothesi not themselves authorities on the relevant area of expertise, how might they judge the credibility of those who claim such authority? The task of judging the trustworthiness of a putative expert is all the more challenging in the high-stakes context of the pandemic, when the decisions we make could have fatal consequences. If these decisions are to be based on the reasoning of others, then we need to be sure that we are outsourcing our deliberation to the right people.Footnote2

I do not claim in this paper to solve this problem. My much more modest proposal is that the nature of the problem and its solution depend on the kind of trust under consideration. Below I apply a distinction between two kinds of trust to the problem of how novices can judge expert trustworthiness. The two different kinds of trust are what I call epistemic trust and recommendation trust. I use the term ‘epistemic trust’ to refer to believing something because the trusted person has told me it is the case. This is the kind of trust solicited from the public when they are asked to have confidence in a division of epistemic labour between experts and novices. I use the term ‘recommendation trust’ to refer to believing that I should do something because the person I recommendation-trust has told me I should. When publics are asked to extend this kind of trust to experts, they are asked to outsource not just part of their epistemic deliberation, but also part of their practical deliberation; to follow a course of action because they have confidence in the expert’s judgement that this is the right thing to do.

I argue in this paper that the means a novice can rely on for judging the trustworthiness of an expert vary depending on whether the trust solicited from the novice is epistemic trust or recommendation trust. There is a well-established literature in philosophy of science and social epistemology about the conditions that must hold for experts to merit the epistemic trust of novices.Footnote3 But more is needed when we are asked not just to believe the experts but to act as they recommend.Footnote4 As I argue below, novices looking for trustworthy expert recommendations need to establish whether the recommended course of action supports what is important to them and accords with their values.

Sometimes public experts make recommendations that do indeed serve the public interest, and effective communication of that fact can indicate to a novice public that the experts’ recommendations can be trusted. But not all social and political contexts allow for such a frictionless alignment of expert advice and public values. Sometimes intractable disputes about what is truly important (e.g. should we prioritise liberty or health?) mean that there is no single, uniform public, and multiple publics disagree about what is most important. In such a context, any policy that successfully aligns with the values of some will fail to align with the values of others. At other times some members of the public will have good reason to be wary of expert recommendations because their community has been historically or systematically alienated from expert institutions. In this context, some of the reliable indicators for epistemic trustworthiness, such as consensus among the experts, can also function as indicators of an in-group/out-group value divide between experts and novices. When this happens, novices can have good reasons to withhold trust in the recommendations of experts despite their epistemic trustworthiness.

In the following sections I argue that the distinction between epistemic and recommendation trust has consequences for novices judging the trustworthiness of experts. I argue for this by identifying the unique difficulties that emerge when a novice is asked not just to believe expert testimony, but to follow expert recommendations. In section 2 I outline criteria for novice judgements of expert trustworthiness that have been proposed by Anderson (Citation2011). I will discuss some objections to the practicability of her criteria, but my ultimate goal will be to demonstrate that, regardless of their merits on Anderson’s own terms, novel problems emerge for her criteria when we shift focus from epistemic trust to recommendation trust. In sections 3 and 4 I outline further detail of the distinction between epistemic and recommendation trust, showing how the conditions for well-placed epistemic trust might be fulfilled while the same for recommendation trust are not. Finally in section 5 I return to Anderson to argue that new problems arise for her criteria when experts are not just offering testimony, but making recommendations.

2. Indicators of Trustworthiness

Anderson (Citation2011) proposes criteria for judging expert trustworthiness that she believes can be used by anyone with a high-school education, access to the internet, and rudimentary web-literacy. Though I do not intend to give a comprehensive defence of these criteria, they will be useful for illustrating how a distinction between epistemic- and recommendation-trust bears on the problem of novices judging expert credibility.

Anderson proposes four criteria. First, Anderson suggests that novices can make a reliable-enough judgement about a putative expert’s level of expertise through appeal to their academic credentials, information typically accessible through a simple online search. Second, Anderson proposes a variety of kinds of evidence to which novices could appeal in assessing the honesty of experts, including a track record of demonstrable dishonesty, a habit of misrepresenting opponents, or a conflict of interest that gives us reason to be sceptical. Anderson names her third criterion epistemic responsibility. As Anderson notes, ‘one’s claims are suspect if one fails to hold oneself accountable to the demands for justification made by the community of inquirers’ (Anderson Citation2011, 146). Though novices might not be in a position to know whether or not an expert’s response to dissenting views meets the relevant academic standards, they are at least able, Anderson suggests, to spot what she calls ‘dialogic irrationality’ (Anderson Citation2011, 147): a response to objections that does not even have the form of a counter-argument, such as continuing to repeat prior claims as if no objections had been raised. Finally, Anderson proposes that novices can be sufficiently confident in the trustworthiness of scientific experts if those who meet the above three criteria reach a consensus on the claim in question. Whether or not there is scientific consensus on a given matter is not always wholly transparent to novices, but Anderson is optimistic, suggesting that we could use surveys of experts in the field, meta-analyses of peer-reviewed literature, or reports from representative institutions such as the National Academy of Sciences (Anderson Citation2011, 149).

Some of these criteria are easier to deploy than others, and critics have raised doubts about how feasible it is for non-experts to use them. Not all of these doubts are, I believe, a significant threat to Anderson’s criteria. Brown (Citation2014, 60), for instance, objects that academic credentials are only helpful when the expertise under scrutiny is scientific, because many experts – environmental justice activists, farmers, factory workers – are non-academic. But it is not evidently illegitimate to stipulate a focus, as Anderson does (Anderson Citation2011, 146), specifically on academic expertise, particularly when we are concerned with novice citizens making judgements of trustworthiness about public health experts. Johnny Brennan also worries that a novice who uses Anderson’s criteria will be unable to spot an ‘epistemic trespasser’, someone with impressive academic credentials who decides to speak about a subject on which they have no expertise (Brennan Citation2020). But Anderson’s explanation of her expertise criterion already includes a hierarchy of expertise that promotes academics with subject-relevant credentials over academics who are credentialed but unqualified in the relevant field. And if, with Brennan, we worry that allowing non-specialist academic credentials even a demoted place on this hierarchy of expertise still gives too much credibility to dilettantes, we need simply remove such credentials from Anderson’s hierarchy of expertise. The limited success of Brown and Brennan’s objections indicates that we must curb our enthusiasm for academic credentials, but the criterion is not thereby shown to be unhelpful for novice judgements of expert trustworthiness.

Brennan also worries that a novice will be ill-placed to judge whether a putative expert is guilty of dialogic irrationality because a person must first be familiar with the relevant scientific consensus before being able to judge whether a dissenter has adequately responded to arguments against their position (Brennan Citation2020, 231). Related worries have also been raised by Stephen John, who has argued that folk philosophy of science can lead novices to hold scientists to inappropriate professional standards, which might include a distorted view of what dialogic rationality demands (John Citation2018). But novice unfamiliarity with the appropriate norms for scientific practice is not evidently a fatal problem for Anderson’s criteria. Those criteria are designed for use by a novice who, though not in a position to become an expert, is willing and able to do at least some rudimentary research to inform themselves about the claims that are made by scientists and the degree of consensus about those claims in the scientific community. There are many reasons why novices would be unwilling to seek out or accept this kind of information, but this does not mean that novices are unable to improve their ability to reliably distinguish dialogic rationality from irrationality. The extent of the problem of how a novice can judge the trustworthiness of an expert is exaggerated if we assume a false binary between a passive and wholly ignorant novice and a knowledgeable expert.

However, not all of Anderson’s proposed means for deploying her criteria are so easily defended. Some do indeed expect more familiarity with scientific practice than we can reasonably expect from most novices, such as her suggestion that non-experts could rely on meta-analyses of peer-reviewed research to judge whether scientists are in agreement (Anderson Citation2011, 149). And as Melissa Lane has rightly observed (Lane Citation2014, 104), Anderson’s appeal to academic qualifications is suited to situations in which credentialed scientists compete with what Anderson calls ‘crackpot theories’, but not so well suited to situations in which a plurality of scientific disciplines lay claim to expertise about the same subject matter. When equally qualified scientists eminent in their respective fields disagree about, for instance, whether children should attend primary schools in person during a pandemic, credentials are not so helpful for deciding whom to trust.

This is not the place to reach a definitive judgement of the viability of Anderson’s criteria, much less a comprehensive solution to the problem of how novices can reach a reliable judgement about the credibility of experts. I submit that for the sake of argument we can accept Anderson’s criteria as workable indicators of epistemic trustworthiness, with the caveat that these criteria have room for improvement. Note also that criticisms of Anderson’s criteria have focussed not on the conditions for trustworthiness that the criteria are designed to track (competence, sincerity, responsibility) but on the plausibility of novices being able to judge whether those conditions hold. For the rest of this paper, I argue that when we apply a distinction between epistemic and recommendation trust, we see that a new kind of problem emerges when experts are issuing recommendations for action.

3. Epistemic and Recommendation Trust

Anderson’s criteria are designed to mitigate a problem that faces novices trying to judge expert trustworthiness. This problem is generally understood to be an epistemic problem. It seems natural to assume that when I am trying to decide whether I should trust an expert, I am trying to decide whether I ought to believe what the expert tells me. Whether I believe them might have practical consequences, but the underlying problem is usually understood to be the domain of epistemologists. But not all trust is epistemic.

Philosophical work on trust sometimes distinguishes epistemic trust from other forms of trust (e.g. O’Neill Citation2018; Rolin Citation2020). One way we can differentiate epistemic trust is to contrast it with what I call practical trust. Roughly speaking, to have epistemic trust in a person is to believe something solely or primarily for the reason that the trusted person has told me it is the case. By contrast, to have practical trust in a person is to stake something important on the trusted person behaving in a certain way that I expect of them. I might practical-trust a babysitter by placing a child in their care, and I might epistemic-trust the same babysitter by believing them when they tell me about the child’s misbehaviour at the end of the night. Epistemic and practical trust differ from a third kind, which I call recommendation trust: believing that I should do something because the person I recommendation-trust has told me I should. Recommendation trust differs from practical trust partly because I can endorse a person’s recommendation without staking anything of importance on the trusted person’s actions, as I do when I practical-trust the babysitter. In this respect, recommendation trust is similar to epistemic trust. But there are also differences between recommendation trust and epistemic trust. The most important difference for the purposes of this paper lies in the different conditions that must be fulfilled in order for trust to be well-placed. What counts as a good reason to trust varies depending on whether that trust is confidence in the actions of another, believing another’s factual testimony, or accepting their recommendation. Moreover, and most importantly for my purposes, a novice could have sufficient reason to think an expert is trustworthy in both the epistemic and practical senses, but not to trust their recommendations.Footnote5

To see why this is the case, consider trust in doctor-patient relationships. Doctors can play multiple roles in the treatment of a patient: explaining treatment options to patients; recommending some options over others; and administering treatment themselves. Though providing information and administering treatments are uncontroversial practices for most health practitioners, whether a doctor should recommend treatment options is a contested issue (Anderson, Cimino, and Lo Citation2013). Patients who value physician recommendations will sometimes solicit those recommendations, which can leave doctors with difficult decisions about whether answering a request for advice will undermine the patient’s autonomy (Sherlock et al. Citation2019). However, while there is little consensus on the appropriate scope for recommendations in the doctor-patient relationship, recommendations regarding e.g. cancer treatment (Frongillo et al. Citation2013), cardiopulmonary resuscitation (Anderson, Cimino, and Lo Citation2013), or participation in medical trials (Yeomans Kinney et al. Citation1998), are a common feature of medical practice.

Whether doctors are treating, informing, or recommending, patients must judge for themselves whether they ought to trust their doctor. But the factors that are relevant to that judgement vary depending on the role the doctor plays and the kind of trust at stake. For my purposes, the difference between trust in doctors when they inform and when they recommend is most important.

Sometimes patients hold beliefs about their treatment options on the basis of what their doctor has told them. These beliefs are at least partly determined by the patient’s epistemic trust in their doctor. Where this trust is not merely blind faith, it is based on the patient’s judgement that their doctor is trustworthy in their role of information-provider. This is likely to be a complex judgement about a range of qualities that are relevant to the trustworthiness of the doctor’s testimony. Depending on our preferred theory of epistemic trust, we might say that the patient will consider whether the doctor is well-informed, competent, sincere, a moral person, or all of the above. However the judgement is reached, if it is positive it allows the patient to use the fact that the doctor has told them that P as a reason to believe that P, and on that basis consider how to proceed with their treatment.

Patients receiving recommendations from their doctor must also make a judgement about whether they ought to trust those recommendations. A patient’s response to a doctor’s recommendation can range from complete deferral (‘I will do this for no other reason than my doctor has told me I should’) to complete dismissal (‘I need a different doctor’), but in most scenarios our trust in doctors when they make recommendations is likely to lie somewhere between these two extremes, and will always stop short of full surrogacy. To trust in a doctor’s recommendation, in its strongest form, is to take the fact that the doctor recommends a particular treatment option as reason enough to pursue that treatment. But even in the case of extreme deferral, the decision to pursue that treatment still lies with the patient.

Any level of deferral to a doctor’s recommendation requires its own judgement of the trustworthiness of the doctor in their role as medical advisor. Crucially, a patient can reasonably judge both that their doctor is trustworthy in their role as informer, and that their doctor is not trustworthy in their role as advisor; the two can come apart. A patient might accept at face value the medical information received from their doctor, but nonetheless judge that their doctor does not adequately understand what is most important to them, and thereby does not have a good grasp of which treatment option will best reflect their values. Unless the patient also has reason to be confident in their doctor’s understanding of how their treatment options serve their interests, they might be willing to defer to the medical experts on matters of fact, but not on what to do about those facts.

Consider the following outline of a clinical case (slightly adapted from Emanuel and Emanuel Citation1992). A 43-year-old patient consults her doctor about treatment options for a breast mass that is revealed to be a ductal carcinoma, with no evidence of metastatic disease. Her doctor outlines a range of treatment options that they claim are appropriate for this particular cancer. The patient is happy to accept the information without looking to verify it by, for instance, fact-checking with another doctor. The patient thereby epistemic-trusts the doctor.

But say also that the doctor follows with a recommendation. They recommend surgery and radiation therapy as a localised response to the mass, and chemotherapy to prevent the spread of the cancer to other parts of the body. The doctor also explains that chemotherapy will come with strong side-effects that will cause months of great discomfort for the patient. This will be particularly difficult for the patient to cope with given that she has recently divorced and returned to work (Emanuel and Emanuel Citation1992, 2222).

The patient must now decide how willing she is to defer to the doctor not just on the relevant medical facts, but on her best treatment option. And she could judge that the expert ought to be trusted with the facts, but not with the choice of treatment, without faulty reasoning. Perhaps she believes that her doctor has underestimated how important it is to her to return to work at this stage in her life. Perhaps she believes the doctor has overestimated how willing she is to sacrifice her immediate plans in the interests of minimising the risk that the cancer will spread. Perhaps the patient believes the doctor has given no thought at all to her personal circumstances, and their recommendation is based solely on minimising health risks without regard for anything else that the patient values. These worries about whether the doctor has understood her values and how they bear on her medical decision-making could be enough to prevent her from taking the doctor’s recommendation as a reason for pursuing the recommended treatment. Most importantly, this could be the case without the patient changing her mind about the trustworthiness of the doctor’s testimony. She might trust the doctor’s testimony yet not trust their recommendation.

It is tempting to infer from the role of values in the above account that to trust the recommendation of an expert we need not only have confidence in their epistemic, professional, and moral virtue, but also confidence that our values align with those of the expert. But this is not quite right. Though a novice might use the expert’s own values as a heuristic for recommendation trustworthiness, alignment of values is not necessary. In the cancer treatment case, the doctor’s evaluation of the benefits and drawbacks of chemotherapy might be different to the patient’s; their values might not align. Yet the doctor might nonetheless be familiar with what the patient cares about and make a recommendation on this basis rather than on the basis of the doctor’s own values. If the patient is not only confident in her doctor’s medical competence, sincerity, and moral character, but also believes that they understand what is important to her and will make recommendations on that basis, then she has reason to think that their recommended course of action is the right one for her. The doctor’s additional understanding of the patient’s values might be secured by the fact that the doctor and the patient are relevantly aligned, but it need not be.

In this regard, and stated more generally, reasonable recommendation trust in experts requires only that the novice believes that the expert has good reasons to believe that their recommendations align with the novice’s values.Footnote6 This confidence in the practical deliberation that I have outsourced to an expert could be grounded in my belief that the expert and I agree on what is important, or it could be grounded in other indicators that the expert understands my values and will make recommendations that serve those values.

4. Public Experts, Values, and the Scope of Expertise

In section 5 I return to the question of which criteria a novice can reliably use to judge the trustworthiness of an expert. Before doing so, I want to consider three objections to my central distinction between epistemic- and recommendation-trust. The first is that if a patient chooses not to defer to their doctor on which treatment is best, this is because they do not believe that the doctor’s expertise extends beyond the role of providing information. We might argue, then, that the patient makes an independent decision not because they withhold a particular kind of trust in experts, but because they think the doctor is an expert only in medical facts and not in treatment decisions.

But while some patients deny the very idea that a doctor could, in their capacity as medical expert, legitimately guide their medical decision, some patients are in principle open to doctors providing expert guidance in the form of treatment recommendations (Anderson, Cimino, and Lo Citation2013). For these patients, recommendation trust in doctors is a possibility. It is, moreover, still a kind of trust in experts, despite the fact that the expertise that warrants epistemic trust is not enough to warrant deferral to a recommendation. When I ask a doctor what they think I should do, I do so partly because of their technical proficiency and their professional experience in guiding other patients in similar situations. I thereby look to doctors for a recommendation in their capacity as a medical expert, and I follow that recommendation if I have the appropriate kind of trust in this expert.

The second objection is that the distinction between epistemic and recommendation trust is largely irrelevant for most contexts in which experts play a significant public role, such as during the pandemic. One might argue that in these contexts any additional reluctance the public has to trust the experts is explained not by invoking a different kind of trust, but by the fact that in a pandemic the information we are asked to accept from experts has great practical consequences.

But a difference between more and less consequential testimony cannot fully explain the distinctive role that experts play when guiding novice publics in contexts like the pandemic. There are a variety of reasons a person might have for refusing to defer to the recommendations of public health experts regardless of whether they trust the testimony of the same experts. Consider three hypothetical cases. Person A accepts the facts about the pandemic as they have been presented by public health scientists, but suspects that the values driving the recommendations from those same scientists are the wrong values to use in pandemic policy. Specifically, they believe that the value of personal liberty is likely to have played too small a role in expert deliberation, and on that basis are not willing to defer to public health officials on the best course of action. Person B accepts the relevant expert testimony and, unlike person A, believes that they share the values underpinning the expert recommendations, but is worried that the experts are likely to have a misguided interpretation of the best strategy to support those values. Specifically, person B agrees with the experts that maintaining a functioning health service is a priority in their society’s response to the pandemic. But person B thinks that doing this requires a strategy that minimises economic repercussions and threats to future funding for health, rather than a strategy, suggested by the experts, that minimises hospital admissions while risking severe economic consequences. Finally, person C is a member of an ethnic minority who also accepts the relevant expert testimony, but suspects that public health recommendations have overlooked the distinctive problems faced by marginalised communities like their own, and believes it is likely that the experts have unwittingly produced policy that is better suited to some, more privileged, sections of society. Person C is worried, for instance, that a pandemic response strategy that depends significantly on a work-from-home directive will favour those with middle-income office jobs that can be more easily transferred to home-working, and come with additional costs for workers in lower-income service-sector jobs.

We have, then, at least three different ways in which novice members of the public might accept the facts as they are presented by public health experts, yet have reason to be reluctant to defer to the same experts on the question of what they should do in response to those facts. The public might suspect that the wrong values are leading public health policy; they might suspect that the policy goals are right but the strategy is likely to be wrong; or they might suspect that the policy is likely to be partial, perhaps prejudiced. The point is that in each of these scenarios the novice can refuse to defer to expert guidance regardless of their willingness to defer on matters of fact, no matter how consequential those facts are. In my terms, these are scenarios in which epistemic trust is achieved, but recommendation trust is absent.

The third objection is that the conditions for epistemic- and recommendation-trustworthiness are not so different as I have suggested because values also play a role in epistemic trust. As many philosophers of science have argued, it would be a mistake to deny that values make legitimate contributions to scientific conclusions. We might for instance cite the role of values in setting the parameters for inductive risk, i.e. the risk of false negatives or false positives generated by underdetermined inductive inferences from evidence (Douglas Citation2000). Thus, for example, the epidemiological modelling used to reach expert testimony about the likely spread of the virus will be shaped by whether we are more willing to risk overestimates or underestimates of infection rates. If epistemic reasons underdetermine the choice between these risks, then we must turn to what we value to determine where we place the risk of error (are we more concerned about being overly cautious, or being too relaxed?). Given the role of values in expert testimony, it seems that reliable judgements of epistemic trustworthiness should also include an assessment of the values informing expert testimony; perhaps, for instance, I should not epistemic-trust competent, sincere, and communicatively responsible epidemiologists who are significantly less concerned than me about the virus spreading.

I am sympathetic to this objection insofar as it supports a broader sentiment that I hope is reflected in this paper: novices withholding trust in experts do not always do so because they are misinformed, ignorant, or irrational. Even well-placed epistemic trust can sometimes require some form of value-agreement between novices and experts. However, I maintain that the distinction between epistemic and recommendation trust is still needed. This is because the values that legitimately inform expert testimony and the values relevant to expert recommendations can diverge. Both persons B and C in the three cases outlined above could plausibly agree with the values that underpin the epidemiological modelling informing expert recommendations. Perhaps for instance they believe that the epidemiologists are correct to prioritise caution against preventable spread of the virus, and hence correct to accept a marginal risk of erroneously pessimistic forecasts. The conditions are in place for them to epistemic-trust the experts, including agreement with the values informing the expert testimony. Nonetheless, persons B and C might still reasonably refuse to defer to the same experts on what we should do in response to the predictions generated by the models, either because they suspect public health officials will tend to favour the wrong measures, or because they suspect public health policy is likely to favour other social groups. This reluctance to defer on the matter of recommendations can stand independently of their trust in the modelling.

5. Revisiting Indicators of Trustworthiness

My primary claim in this paper is that the difference between recommendation trust and other forms of trust is significant for the criteria novices can use to make reliable judgements of expert trustworthiness. My way of demonstrating this will be to return to Anderson’s criteria and show that they face unique problems when the public is asked not just to believe the experts, but to follow their recommended course of action.

Some of Anderson’s criteria remain helpful when experts make recommendations. If I have good reason to doubt that an expert believes the recommendations they make, this is reason enough for me not to follow those recommendations. Similarly, if I have good reason to think that an expert is misleading the public – either intentionally, or by negligently issuing easily misinterpreted statements about e.g. the wisdom of mask-wearing – then I also have good reason not to follow that expert’s recommendations. Thus Anderson’s honesty criterion is as reliable for judgements of recommendation trustworthiness as it is for epistemic trustworthiness.

Consider next epistemic responsibility. Perhaps I learn of an expert’s recommendations through a press conference held by the government and its scientific advisers. In this context the expert is subject to questions from journalists, and sometimes the questions posed reflect dissent from other experts with a relatively high public profile. How the experts respond to such questions can be a reliable indicator of the trustworthiness of their guidance. Say a journalist asks an expert why they recommend that we wear masks in all public spaces when there is evidence for a significant difference in risk of spread between indoor and outdoor public spaces. Were the expert to respond by denying that such evidence exists, when it does exist, or by simply ignoring the content of the question and repeating what they had previously said, the dialogic irresponsibility of their response would give me good reason to withhold trust. And if such press conferences do not take place at all, I might surmise that the government and its scientific advisers are avoiding public scrutiny. This might also give me reason to withhold trust in that guidance. We can therefore accept that Anderson’s epistemic responsibility criterion also applies when experts issue recommendations, though we might wish to relabel it as ‘dialogic responsibility’ to avoid a narrowly epistemic resonance.

However, problems arise when we consider Anderson’s other two criteria: expertise and consensus. Anderson’s expertise criterion is designed specifically for judging the trustworthiness of scientists, and accordingly I have granted above that academic credentials could be a reliable indicator for an (albeit relatively well-informed) novice to use. But for judgements of recommendation trustworthiness, a new problem emerges. Even if a scientist is a demonstrable expert in the relevant field, this expertise does not reliably track a capacity to understand the values of the novice-recipient of recommendations, and to issue recommendations informed by this understanding. Public health experts might know all there is to know about the relevant epidemiology, but know little about, for example, the unique balance of risk preferred by end-of-life patients, or the role that religious values can play in deciding what to do during the pandemic. Moreover, a novice might have confidence in the academic expertise of those who are leading public health policy, and accept the information cited as evidence to support that policy, but still have good reason to withhold trust in that policy. This is because, as I argued in sections 3 and 4, reasonable recommendation trust in experts requires more: it requires that the novice has good reason to believe that the experts have good reason to believe that the recommended course of action aligns with the values of the novice.

We might worry that I am indulging selfish scepticism, that is, the scepticism of novices who refuse to comply until they are shown ‘what is in it for them’. But by recommendations aligned with the values of novices, I do not mean recommendations that appeal to a purely self-interested public. My values could include, for instance, the wellbeing of others, including a broad conception of the public good or, closer to home, the wellbeing of clinically vulnerable family members or friends whom I care for. Thus, for example, a person’s refusal to comply with a ‘stay at home’ social distancing measure could be because they believe that the risks that come with mixing households are worth taking for the sake of continuing to provide care for a loved one.

Alternatively, one might object that for someone to count as reasonably sceptical in the way I am suggesting, they must ignore efforts that have been made during the pandemic to explain the interests served by following public-health policy. In England, for example, one of the few consistent messages relayed to the public from government and government-appointed experts has been: ‘stay at home to protect the NHS [National Health Service]’. This strategy appeals to a value that cuts across the UK political spectrum. One might argue that to withhold trust in this context one would have to ignore such appeals, in which case scepticism begins to look less reasonable.

I have two replies to this. First, the fact that responsible public health policy is already communicated to the public with an appeal to what is important to them shows that we must indeed factor in additional non-epistemic factors when soliciting recommendation trust. Second, reasonable scepticism is still possible in this context. The values and interests that public health policy communicators appeal to might not be my own, or I might be sceptical about the experts’ understanding of those values and of what best serves them. Expert-appeal to the values of novices is fragile for a number of reasons that have little to do with the potential for ignorance or irrationality among novices. In both the US and the UK, polling on vaccine hesitancy has shown much greater reluctance to accept the vaccine among black and minority ethnic respondents (see note 7), a disparity that some argue is the result of greater alienation of these groups from the institutions responsible for public health policy. And in addition to the erosion of trust caused by structural racism, expert-appeal to values is also vulnerable to what we might call ivory-tower scepticism: doubt about experts’ ability to understand the priorities of those who are not members of the scientific community, generated by a perceived social division between an academically-credentialed elite and everyone else. In this respect, academic expertise might not just be an unreliable indicator for recommendation trustworthiness. Where novices have good reason to entertain ivory-tower scepticism, academic credentials might be an indicator of recommendation untrustworthiness.

The danger of ivory-tower scepticism leads us to a problem for Anderson’s final criterion, consensus. This criterion allows novices to make judgements not just about the trustworthiness of individual scientists but also about the trustworthiness of the scientific community. But this also raises the possibility of a new and potentially more severe perception of alienation between novices and experts. If I have good reason to think that highly-credentialed scientific experts are unlikely to understand what is important to me, then agreement among those experts is not likely to persuade me otherwise. Indeed, it may even exacerbate my scepticism; I might take agreement among academics on the right course of action as an indication that they share values because of their shared academic experiences. And given that my novice status means I do not share these same experiences, the consensus among the scientific community might lead me to think that differences in our level of expertise align with more significant social and cultural differences that increase the chance of my values diverging from those of the experts I am asked to follow.

Trustworthiness of the scientific community, when it issues recommendations, is thereby vulnerable to broader social trends that determine the extent to which novices are likely to think of experts not simply as people with more academic qualifications, but as people in a significantly different class, and potentially with significantly different interests. Michael Sandel has argued that a large proportion of Americans without access to elite-college education suffer from a culture of credentialism, whereby those without college degrees are excluded from opportunities for upward social mobility and widely denigrated as under-educated ‘deplorables’ (Sandel 2020; see also similar work in Markovits 2019). When access to upper-tier income brackets, as well as the communities that house higher earners and the schools that educate their children, is determined by access to elite-college education, those excluded from such education are particularly vulnerable to political movements seeking to stoke resentment towards universities and other epistemic institutions (e.g. traditional news media). And when such political movements are successful, a consensus among scientific experts on the right course of action is likely to be an indicator to those who feel alienated from the social elite that the experts only understand what is important to their own social group, and not what is important to novices excluded from that elite. In short, when broader trends create strong social divisions between novices and experts, expert consensus might still be a reliable indicator of epistemic trustworthiness, but it is no longer a good indicator of recommendation trustworthiness.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Leverhulme Trust under a Research Leadership Award for the Competition and Competitiveness Project at the University of Essex.

Notes on contributors

Matthew Bennett

Matthew Bennett is a moral and political philosopher working on trust in people, experts, politics, and institutions. He currently works at the University of Essex as a Senior Research Officer on the Leverhulme-funded project ‘Competition and Competitiveness’, which investigates conceptual, historical, and normative questions about competition and competitive social relations.

Notes

1. I follow O’Neill in treating trustworthiness as whatever it is that makes trust well-placed where it is. It is important not to confuse this with another natural way of speaking about trustworthiness, that is, the virtue held by trustworthy people. When discussing judgements of expert trustworthiness I am not referring to judgements of whether experts are trustworthy people; I am instead discussing judgements of whether trust in the expert would be well-placed.

2. I owe the outsourcing metaphor to Nguyen (2020).

3. See e.g. Douglas (2009), chapter 6, Goldman (2001), John (2018), Rolin (2020), Schroeder (2021).

4. I use the term ‘recommendation’ as a shorthand for a category of speech-acts including recommendation, guidance, advice, direction, and instruction. I will bracket differences within this category, as well as the important difference between expert recommendation and legal mandates enforced by the state. A separate analysis would be needed to understand the role of trustworthiness in the latter.

5. I discuss the difference between the rationality of epistemic and recommendation trust in greater detail in Bennett (2020).

6. Why not say that reasonable recommendation trust in experts requires only that the novice believes that the expert’s recommendations align with the novice’s values? Because were this the case, we could trust an expert without outsourcing deliberation at all. For I could believe an expert’s recommendations align with my values, not because of anything about the expert, but because I already know what course of action aligns with my values, and it so happens that the expert has recommended the same course of action. This is not trust, but recognition of a coincidence.

7. e.g. COVID Collaborative (2020).

References

  • Anderson, Elizabeth. 2011. “Democracy, Public Policy, and Lay Assessments of Scientific Testimony.” Episteme 8 (2): 144–164. doi:10.3366/epi.2011.0013.
  • Anderson, Wendy G., Jenica W. Cimino, and Bernard Lo. 2013. “Seriously Ill Hospitalized Patients’ Perspectives on the Benefits and Harms of Two Models of Hospital CPR Discussions.” Patient Education and Counseling 93 (3): 633–640.
  • Bennett, Matthew. 2020. “Should I Do as I’m Told? Trust, Experts, and COVID-19.” Kennedy Institute of Ethics Journal 30 (3–4): 243–263. doi:10.1353/ken.2020.0014.
  • Brennan, Johnny. 2020. “Can Novices Trust Themselves to Choose Trustworthy Experts? Reasons for Reserved Optimism.” Social Epistemology 34 (3): 227–240.
  • Brown, Mark B. 2014. “Expertise and Deliberative Democracy.” In Deliberative Democracy: Issues and Cases, edited by Stephen Elstub and Michael McLaverty, 50–68. Edinburgh: Edinburgh University Press.
  • COVID Collaborative. 2020. “COVID Collaborative Survey: Coronavirus Vaccination Hesitancy in the Black and Latinx Communities.” https://www.covidcollaborative.us/content/vaccine-treatments/coronavirus-vaccine-hesitancy-in-black-and-latinx-communities
  • Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4): 559–579. doi:10.1086/392855.
  • Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
  • Emanuel, Ezekiel J., and Linda L. Emanuel. 1992. “Four Models of the Physician-Patient Relationship.” JAMA 267 (16): 2221–2226.
  • Frongillo, Marissa, S. Feibelmann, J. Belkora, C. Lee, and K. Sepucha. 2013. “Is There Shared Decision Making When the Provider Makes a Recommendation?” Patient Education and Counseling 90 (1): 69–73. doi:10.1016/j.pec.2012.08.016.
  • Goldman, Alvin. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63 (1): 85–110.
  • John, Stephen. 2018. “Epistemic Trust and the Ethics of Science Communication: Against Transparency, Openness, Sincerity and Honesty.” Social Epistemology 32 (2): 75–87. doi:10.1080/02691728.2017.1410864.
  • Lane, Melissa. 2014. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgement.” Episteme 11 (1): 97–118.
  • Markovits, Daniel. 2019. The Meritocracy Trap. New York: Penguin Press.
  • Nguyen, C. Thi. 2020. “Trust as an Unquestioning Attitude.” Oxford Studies in Epistemology. Oxford: Oxford University Press.
  • O’Neill, Onora. 2018. “Linking Trust to Trustworthiness.” International Journal of Philosophical Studies 26 (2): 293–300. doi:10.1080/09672559.2018.1454637.
  • Rolin, Kristina. 2020. “Trust in Science.” In The Routledge Handbook of Trust and Philosophy, edited by Judith Simon. New York: Routledge.
  • Sandel, Michael. 2020. The Tyranny of Merit. London: Penguin Books.
  • Schroeder, S. Andrew. 2021. “Democratic Values: A Better Foundation for Public Trust in Science.” The British Journal for the Philosophy of Science 72 (2): 545–562.
  • Sherlock, Rebecca, F. Wood, N. Joseph‐Williams, D. Williams, J. Hyam, H. Sweetland, and H. McGarrigle. 2019. “‘What Would You Recommend Doctor?’—Discourse Analysis of a Moment of Dissonance When Sharing Decisions in Clinical Consultations.” Health Expectations 22 (3): 547–554. doi:10.1111/hex.12881.
  • Yeomans Kinney, Anita, S. W. Vernon, and V. G. Vogel. 1998. “The Effect of Physician Recommendation on Enrolment in the Breast Cancer Chemoprevention Trial.” Preventive Medicine 27 (5): 713–719. doi:10.1006/pmed.1998.0349.