Facets of Science: Values

The importance of values for science

ABSTRACT

This essay examines the important roles for values in science, from deciding which research projects are worth pursuing, to shaping good methodological approaches (including ethical concerns), to assessing the sufficiency of evidence for scientific claims. I highlight the necessity of social and ethical value judgements in science, particularly for producing properly responsible research. I then examine what the need for values in scientific practice implies for public trust in science. I argue that values serve as a key basis for public trust in scientists, along with the presence of expertise and engagement in a well-functioning expert community, and that scientists should thus be more open about the values informing their work. This result holds whether the science at issue is a matter of consensus or still contested within the scientific community.

Introduction

It might seem obvious that science should aim at being free from all social and ethical values. Science aims at empirical truths, claims about the way the world is, and social and ethical values are about the way the world should be. Such values, it seems, would do nothing but distort science, diverting it away from an accurate understanding of the world towards our concerns, desires, wishes and dreams. Further, if such values were embedded in science, one might worry that the trust society places in science would be undermined. If science is not value-free and values are part of science, then why should the public trust that science is a source of accurate knowledge about the world?

And yet, social and ethical values are essential to the practice of science. We need social and ethical values to direct our attention to the most significant and salient phenomena to study. We need social and ethical values to shape which methods are acceptable and sufficiently accurate for the pursuit of knowledge about those phenomena. Even in the very heart of science, when we make inferences based on the evidence we have collected, we need social and ethical values to help us decide when that evidence is sufficient for the claims we make.

This essay will explore the importance of values for science and show both why values are essential to science and how their influence must be limited for science to be an effective source for empirical knowledge. Just because values are essential for scientific practice does not mean values can play any role whatsoever in the pursuit of science. Further, I will describe how such values, in their limited roles, serve as a basis for societal trust in science. Contrary to the standard view, social and ethical values, when playing their proper role in science, are reasons for the public to trust, rather than distrust, science.

In practice, this means scientists need to be more open about the values that drive their work and that inform their decisions and judgements in scientific practice, just as scientists have been asked to be more open about the evidence they gather and use in their work. The essay will close with reflections on what an acceptance of values in science entails for the everyday practices of science.

Why values are essential to science

First, let me explain what I mean by social and ethical values. Broadly, social and ethical values concern how we should live as human beings, both how we should behave as individuals and how we should live together in human communities. Crucial ethical values include values regarding human autonomy, values that inform which risks are acceptable to impose on others and under what conditions, values that capture what is thought of as a life worth living, and values that shape how we interact with each other on a personal basis (what are our obligations as friends, family or neighbours?). Social values shape how we live together in societies, including norms of the public sphere, the value of political discourse, and attention to social injustices. Values reflect what we care deeply about in our ethical and social lives. There is no sharp dividing line between social and ethical values (e.g. concerns about justice can be both ethical and social). The question for us is less about whether a value is social, ethical or both than about how and when such values are relevant and important for scientific practice.

There are at least three decision points in science that require the consideration of social and ethical values. The first is in deciding which knowledge to pursue. The second is in deciding how to pursue it. The third is in deciding whether the evidence one has thus far gathered in one’s research is sufficient or not for supporting scientific inferences. There are other points in the scientific process that also require values. Some are discipline-specific (such as particular modelling assumptions that must be made). Some are pervasive but outside the scope of scientific research proper, such as in how someone might apply knowledge gained in the broader society. For the purposes of showing that values are an essential part of scientific practice, these three decision points suffice. There may well be additional moments when values are essential in your field. This essay should help you recognize those moments and help you reflect on them.¹

First, we need values to help us decide what it is important to know. Pursuing research takes time and resources. We don’t want to waste our time and funding on projects that would not be significant (Kitcher 2001, 2004). Just because something is true and accurate does not mean it is worth the effort to develop knowledge of it. For example, one could count all the individual leaves on a tree or record the specific temperature of a piece of metal every day for years, and one could thus produce an accurate empirical understanding of the variability of leaves on that particular tree or the temperature of that piece of metal, but why should one do that? Aiming at truth does not by itself tell us which truths are worth knowing. We need our social and ethical values to help shape that judgement.

This does not mean that our values alone dictate what is worth attempting to know. Such judgements must also be shaped by the existing knowledge in science and what kinds of projects seem tractable given that knowledge. Our existing knowledge tells us, for example, that it is a fool’s errand to try to produce a perpetual motion machine, or to develop a universal vaccine against all viruses (whereas a universal flu vaccine may well be within our grasp). Further, we have some sense of what our methodologies can and cannot achieve, and that also feeds into judgements about which projects are worth doing. Yet our societal challenges and our values regarding what we care about must also have a substantial influence on our choices of research endeavours. We research diseases, both causes and treatments, in order to alleviate human suffering from such diseases. We research animal behaviour because we want to understand better how to protect animal species from threats of extinction. We research new materials in order to build new technologies with new capacities (such as organic batteries), aiming at solving human problems (a shortage of rare earth metals and a need for battery energy storage). What counts as significant knowledge, worth pursuing, is necessarily and legitimately shaped by our social and ethical concerns.

Which methods we use when pursuing significant research is also importantly shaped by social and ethical values. Abuses of human subjects in the mid-twentieth century (in Nazi concentration camps, in the Tuskegee syphilis study, in numerous studies detailed by Beecher and in many examples that have only later come to light – see, e.g. Beecher 1966; Brandt 1978; Weindling 2004; Reverby 2011; Mosby 2013) led to a realization that scientists did not automatically act in the best interests of their human subjects, that all manner of horrific treatment was excused in the name of the pursuit of knowledge and that deeper ethical reflection, guidance and oversight were needed to protect human subjects. Similar realizations occurred regarding animal subjects, leading to regulation of animal subject research as well. That human and animal subjects are not simply tools to be used by scientists but beings requiring ethical protection properly restricts what scientists can acceptably do in the pursuit of science.

It is not only in research on beings with moral standing (such as humans and sentient animals) that social and ethical values shape scientific methods. Biosafety levels were developed in order to maintain adequate safety measures for work with possibly dangerous organisms. Such levels are meant to protect not just the scientists doing the work but also the broader society that would be harmed by lab releases. Other fields (such as those working with radioactive materials) must also grapple with what suffices to keep both researchers and the public safe from possible harms of pursuing research. And social and ethical values must be deployed in deciding whether methodologies that require some level of damage to artefacts or historical sites (e.g. in archaeological digs) are appropriate. One must decide whether the knowledge to be gained is worth the destruction necessary for the deployment of the method, or whether alternative methods are preferable (if available). Very often, scientists must consult with and get consent from communities affected by their methods to get a proper assessment of the ethical acceptability of a methodological approach.

Indeed, in general, methodologies require both ethical and epistemic assessment. Is the method you are pursuing going to be able to substantially answer the question you are asking, and thus worth the time and effort of pursuit, in addition to any ethical harms or risks that come with the method? For example, is the sample size large enough so that the study will be sufficiently powered? If a larger sample size is not available, are the risks the research brings with it worth it? Does the demand for a larger sample change the risks? Detailed reflection on the methods from both an epistemic and moral perspective is generally needed to get the science right, and to ensure the science we do is worth doing. Spending time and resources on methods that will not provide sufficiently precise or accurate results is a waste of those resources and thus also a problem in our resource-constrained world.

Finally, when we have decided upon our projects and our methods, and gathered the data, we again need social and ethical values to complete scientific inference. This is the most pointed problem with the value-free ideal for science, because it is precisely at this moment in the scientific process that the value-free ideal is supposed to be the most important (Douglas 2009). Yet it is here that again we cannot proceed without social and ethical values.

To see why, consider that scientific claims always move from specific evidence to more general claims about the way the world is. Scientists study and gather data from instances (be it instances of chemical composition, human or animal behaviour, light from distant stars, temperature readings on this planet, etc.) and then make general claims based on that data. Sometimes those claims are narrow expansions, from ‘I have seen many instances of a particular behaviour for this group of entities’ to ‘This group of entities will always (or some percentage of time) behave in this way’. Sometimes those claims move from specific claims about the behaviour of entities to more theoretical claims about underlying causal structures. In either case, the scientist must make the judgement that the available evidence is sufficient to support the expansion from the specific instances to the general conclusion. The evidence is never complete for scientific claims, even for the narrow expansions (because we cannot measure all the instances), much less for the broader causal claims (which are usually much more interesting to us).

When is the evidence enough? One might think that one’s discipline gives the answer here – that whatever is considered enough within the disciplinary space answers this question. Different disciplines have different standards: some worry more about making a claim prematurely than about waiting too long, others more about missing an important claim than about being wrong in a claim made. We can see this in different statistical standards. If a discipline demands that p < 0.05 in order for a result to be ‘significant’, that discipline has decided to tolerate at most a 5% chance of a false positive (of declaring an effect where there is none). If a discipline has a stricter (e.g. p < 0.01) or more lenient (p < 0.10) standard (and fields do vary in their conventions), the acceptable risk of false positives shifts accordingly. The chance of a false negative depends on the power of the study, and few fields have set requirements for power. (Focusing on power rather than statistical significance is one of the advantages of registered reports.)
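To make this trade-off concrete, here is a minimal sketch in Python (using scipy, with entirely hypothetical numbers: a standardized effect of 0.5 and 30 observations per group) based on a simple normal approximation for a two-sided, two-sample test, not any particular field’s actual procedure. Tightening the significance threshold lowers the tolerated false-positive rate but, for a fixed design, also lowers power and so raises the false-negative rate.

    from scipy.stats import norm

    def approx_power(effect_size, n_per_group, alpha):
        """Approximate power of a two-sided, two-sample z-test.

        effect_size: standardized mean difference (hypothetical)
        n_per_group: observations per group (hypothetical)
        alpha: significance threshold, i.e. the accepted false-positive rate
        """
        se = (2.0 / n_per_group) ** 0.5      # standard error in standardized units
        z_crit = norm.ppf(1 - alpha / 2)     # critical value implied by the chosen alpha
        shift = effect_size / se             # where the test statistic centres if the effect is real
        # Power: probability the statistic lands beyond the critical value in either tail
        return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

    # Hypothetical study: medium effect (d = 0.5), 30 observations per group
    for alpha in (0.10, 0.05, 0.01):
        power = approx_power(0.5, 30, alpha)
        print(f"alpha = {alpha:.2f} -> power ~ {power:.2f}, false-negative risk ~ {1 - power:.2f}")

On these made-up numbers, moving from p < 0.10 to p < 0.01 roughly doubles the chance of missing a real effect (from about 0.4 to about 0.75); which risk matters more is precisely the value-laden question a discipline’s convention quietly answers.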

Does this take the issue of judging evidential sufficiency out of the hands of the scientist, preserving the value-free ideal? No, for several reasons. First, the scientist must decide that the current disciplinary standard is appropriate. Simply following the standards of one’s discipline automatically, and without critical thought, sets aside important responsibilities the scientist has to consider carefully why they are doing what they are doing. The history of science is littered with once-held views on methodology and theory that have since been rejected, and one of the things that gets questioned is the adequacy of a particular methodological approach. Setting false positive standards is a methodological convention that can be open to critique. Further, the disciplinary standards themselves need a justification and are sometimes a matter of open debate within a discipline. For example, in the face of current concerns over replication failures, some in the social sciences have begun to call for a lower p-value threshold before calling a result significant (Benjamin et al. 2018). Finally, even if the scientist chooses to accept and follow disciplinary standards, decisions about evidential adequacy very often arise before any statistical test is run. Decisions about data characterization, about possible data outliers and about when to end experiments must be made in addition to decisions about statistical significance (Douglas 2000). Clear conventions across this range of decision points are not available, nor should such conventions be taken as incontrovertible.
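A small, entirely hypothetical illustration of why such pre-statistical decisions matter: whether a single anomalous measurement is kept or excluded as an outlier can move the same comparison across a conventional significance threshold. The sketch below (Python with scipy; the data are invented for the example) is not anyone’s actual study; the point is only that the analyst’s judgement call, made before any test is run, does the decisive work.

    from scipy.stats import ttest_ind

    control = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
    treated = [5.3, 5.4, 5.2, 5.5, 5.3, 5.4, 5.2, 3.9]   # the final reading looks anomalous

    # Keep the suspect reading: the group difference is far from significant at p < 0.05
    stat_all, p_all = ttest_ind(control, treated)

    # Exclude it as an outlier: the same comparison becomes highly significant
    stat_trim, p_trim = ttest_ind(control, treated[:-1])

    print(f"with the anomalous reading:    p = {p_all:.3f}")
    print(f"without the anomalous reading: p = {p_trim:.4f}")

Whether excluding that reading is legitimate depends on background judgement about the measurement and the system under study, and on the consequences of being wrong in either direction, which is exactly where social and ethical values enter.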

Why do all these points of judgement require social and ethical values? Starting in the 1960s, some philosophers argued that decisions in the face of chronic evidential incompleteness (or underdetermination) could be made on the basis of values internal to science only, utilizing so-called ‘epistemic values’ such as explanatory power, predictive power, consistency with existing theory, broad scope, and simplicity (Levi 1960, 1962; Kuhn 1977; McMullin 1983). More recent discussions have shown that this strategy does not avoid the necessity of social and ethical values in assessments of evidential sufficiency (Laudan 2004; Steel 2010; Douglas 2013, 2017). Some of these values are central for assessing the acceptability of any scientific claim (is it internally consistent? Does it fit with the existing evidence?). Such concerns set the minimal acceptability of scientific work. Other values guide judgements about how strong the existing evidence is. For example, does the new claim explain a complex set of evidence not previously explained together (a sense of simplicity)? Or does the new claim provide precise predictions where none were available previously (predictive power)? Such values help assess the strength of the available evidence but remain silent on whether the evidence is strong enough. Finally, some values help assess whether a particular hypothesis or theory is likely to be fruitful for future research. Does it have broad scope of application, even if not yet tested? Does it have the potential to generate new testable predictions? Again, this is very helpful for scientific practice but still silent on whether the evidence available right now is sufficient for a claim or whether we should view the evidence (incomplete as it is) as strong enough.

Questions of whether the evidence is strong enough require consideration of the risks of error in accepting a claim too soon and in withholding a claim too long. Those risks of error must include the risks to the broader society in which science functions, as well as risks to the scientific enterprise. Of course, inferring too soon or waiting too long presents risks to science. Scientists who wait too long to accept evidence as sufficiently strong for a claim risk being left behind by their field and risk losing proper credit for their discoveries. Scientists who infer too soon risk their reputations for being sufficiently rigorous and risk leading the scientific community in the wrong direction. But if scientists only consider risks to themselves and the scientific community, they are ignoring their responsibilities to the broader society in which they work, acting as if only science mattered. It is this stance, acting as if only science mattered, that led to the deeply unethical treatment of human and animal subjects in the twentieth century. Scientists must embrace concern for the broader society, and this includes the responsibility to consider the risks their work poses to that society, both when making methodological choices and when making inferences. Embracing this responsibility requires social and ethical values.

To see why, consider a scientist trying to decide whether the evidence is strong enough to support a claim. Accepting the evidence as sufficiently strong prematurely risks making a claim too soon; waiting until the evidence gets stronger risks waiting too long before making a claim. Such risks pose costs not just to scientists and the scientific community, but to the broader society, particularly when claims are directly relevant to the broader society. Consider evidence a scientist may gather suggesting a new virus with pandemic potential is emerging. Waiting too long increases the likelihood of the pandemic occurring unchecked (as happened with COVID). Accepting too soon (on too flimsy an evidential basis) creates risks of unneeded societal restrictions. When is the evidence strong enough? While epistemic values can help us assess whether a claim is minimally sound, assess the available evidential strength and assess potential fruitfulness for future research, they cannot tell us when the evidence we have at the moment is sufficiently strong for the claim. Considering the broader societal impacts and the values associated with those impacts is needed.

Another example shows the importance of social and ethical values. Consider a study that shows some possibility of success for a new disease treatment. However, the treatment has substantial side effects and the sample size was small, even if the outcomes appeared to be clearly improved by the treatment. Is this evidence strong enough to warrant wider use of the treatment? Certainly not on its own – studies with larger sample sizes are needed, along with good controls. However, if the disease is deadly with no other treatment options available, experimental use of the treatment might well be warranted. Such decisions are shaped by the social and ethical values at stake – values which help us weigh the risks of harm from pushing out a treatment with possible side-effects against the risks of forgoing benefits of treatment for patients without other options. Without these values, it would not be possible to assess and work within the tensions involved. A purely epistemic approach would call for us to simply wait for overwhelming evidence, regardless of the costs to people with the disease. Similar issues arise in any field with public relevance.

In short, there are at least three vitally important locations for social and ethical values to influence the practice of science: (1) in deciding which research to pursue; (2) in deciding which methodologies to employ; and (3) in deciding whether the evidence we have is sufficient for a scientific claim. Many other decision points in science may also require the use of values, such as deciding which terms to use for scientific entities and which assumptions to deploy in modelling (as models always require some idealizations) (Elliott 2017). And social and ethical values are also needed to decide how and when to communicate research to decision-makers and the public as well as how to apply scientific findings in broader practice. Science is, thus, legitimately and properly a value-saturated endeavour.

Even as these decision points in science show the importance of social and ethical values in science, it is also crucial to note that social and ethical values can be abused in science – used in ways that damage the ability of science to pursue inquiry in responsible ways. This can happen when social and ethical values become reasons to cloud serious and proper inquiry. For example, one could choose a research project not because one wants to understand something significant but in order to obfuscate the understanding of phenomena. This was an approach used by tobacco companies in order to generate data on alternative causes of diseases clearly linked to smoking, so that they could confuse the medical community and the general public (Oreskes and Conway 2010). Or one could pick methodologies virtually guaranteed to produce one’s desired outcomes. For example, in studying hormonally active pollutants, one could use an animal model known to be hormonally insensitive and thus produce the desired negative results (Wilholt 2009). Finally, one could demand ever stronger evidence in order to avoid coming to an undesired conclusion. In each of these instances, the values are misused in science in order to undermine proper inquiry rather than to guide it.

It was worries about such abuses of values in science that helped to solidify the value-free ideal for science. Yet the value-free ideal provides inappropriate guidance for the practice of science, as social and ethical values are central to the responsible conduct of science. Instead, we must uphold the value of inquiry itself in addition to the needed epistemic, social and ethical values. Although the value of inquiry is not an overriding value (if it were, abuses of human subjects could be justified by the knowledge gained), the value of inquiry should not be subverted covertly. If social and ethical values work to predetermine the results of scientific investigation or to prevent undesired conclusions from being drawn, the value of inquiry has been subverted. Note that in the cases of abuse in the previous paragraph, the values are deployed in a hidden manner – the researchers are not open about their aims and the values involved (to confuse the strong evidence on the harmful effects of tobacco, to ensure one gets desired predetermined results, to avoid arriving at unwanted conclusions). Hiding the actual values and merely pretending to pursue inquiry is what harms science, not the presence of values in science generally.

The importance and necessity of values in the scientific process, from deciding which knowledge to pursue, to how to pursue it, to when one has sufficient evidence for a claim, means that value-free science is improperly guided and inadequate science. There is no inherent contradiction between using values to guide scientific inquiry and doing science properly. One must value the process and practice of inquiry as well and doing so should help protect one from abusing values in science. A proper understanding of the role of values in science is crucial to doing science well, and of value to the public. The question then becomes, what should this mean for the relationship between science and the public?

Science and the public

Since WWII, the trust the public has placed in science has been justified in terms of both the instrumental success of science (its ability to enable us to intervene in the world successfully) and its freedom from social and ethical values (reflecting the concerns, described above, that arise when values are abused in science). In the previous section, I showed how important values are for the proper pursuit of inquiry. One can add to that list of key judgements the assessment of instrumental success – we want science to enable us to do positive things in society, not destructive things. This is why, for example, we have clear bans on research into new biological and chemical weapons and are considering a ban on autonomous weapons. Even the assessment of the instrumental success of science depends on values – does the technical success science provides serve the good of society? This means that values cannot be separated from reasons to trust science, because of both the need to assess instrumental success and the nature of scientific practice (as elaborated above).

The importance of social and ethical values in science means we need to rethink the presumed basis for public trust in science and in scientists. In this section, I will argue that there are several important bases for trust in science and among them are social and ethical values. The ideal of the purely value-free, cold and detached scientist undermines public trust in science, rather than bolstering it. Implications for science communication practices will also be discussed.

Why should the public trust science? One basis is the presence of expertise among scientists. Expertise consists of the ability to make judgements quickly in a complex terrain, to see what is important and what is not, and to recognize what range of issues remains open. Expert judgement takes years to develop and is domain-specific. Most people have expertise in some area, from how to navigate their own local traffic patterns, to what grows best in their gardens, to the expertise needed for their work. Scientific expertise arises from particular training in a field plus ongoing practice in pursuing inquiry in that field. Expertise must be continually honed against other experts and developed with practice in the world.

The presence of expertise cannot always be easily assessed. Some expertise can be assessed by whether expert judgements lead to success in practice. Expert chess players win games; expert gardeners successfully grow desired plants; expert chefs produce delicious food. One does not need to be an expert to make judgements about whether one is in the presence of this kind of potent expertise; it is easy to detect the presence of expertise in such cases. However, much valuable expertise does not afford easy measures of success. Some expertise involves judgements with lots of possible confounders (e.g. complex systems that cannot be isolated in practice, such as public health) and/or inter-individual variation (e.g. whether a particular medical treatment will work for you). Some expertise involves judgements concerning systems where the accuracy of judgements can take decades to assess. Climate modellers, for example, do not have access to quick measures of their expertise. Whether their models are predictively accurate will take years if not decades to assess and confounders could be present (even as modellers continually seek such confounders as they improve and test their models). Instead of looking to raw success, such expertise can be assessed on the ability of experts to explain the basis of their judgements. We need experts grappling with these kinds of systems (whether social systems, climate systems, ecological systems, health systems or some combination) to explain why they think what they think, as short-term success measures are elusive.

Experts working with such systems should think about how they would explain their judgements to non-experts. The purpose of such explanations is not to impart the fullness of expertise to the non-expert – no single communication can achieve this. Rather, the purpose of the explanation is to demonstrate the nature of expert judgement, what is a central consideration for the expert and what has been ruled out. It is to give a rough map to the terrain of expertise, not so the non-expert can navigate the terrain themselves, but so they understand how rich and complex the terrain actually is. Talking to an expert generally reveals to the non-expert that there are considerations the non-expert has not thought of at all, and thus shows the importance of the expert as a guide.

In sum, the first basis of trust in expertise is in the presence of expertise made visible, whether it is assessed through short-term measures of success or expert explanations of how their judgement works or some combination of the two.

A second crucial basis for trust arises from the social community of experts and how it functions to generate knowledge (Oreskes 2019). Important attention has been given to the social factors operating in expert communities and to which social factors aid in the production of knowledge (and the lack of which impedes it) (Longino 1990, 2002). Scientists must communicate their findings with each other so that they can then engage in critical discourse, disputing each other’s findings, critiquing and refining methods and developing alternative hypotheses. The more socially diverse the scientific expert community engaged in such practice, the better the range of divergent perspectives present in the scientific work and the better the range of criticisms and alternatives offered. Scientific communities thus must foster diversity among their members (however usefully defined for knowledge production) and encourage criticism as a normal part of scientific practice. Publication peer review (pre and/or post publication), Q&A sessions at conferences and debates among scientists in publications and talks are essential for knowledge production in a scientific community. Scientists must also hold each other accountable for responding to criticisms, which can include acknowledging weaknesses in work, rebutting concerns and/or altering practices going forward.

Proper engagement with a scientific community in these social epistemic practices of debate and discourse is a key indicator of whether an expert actually has scientific expertise. Failure to participate in the conferences and publication venues central to a field (especially peer-reviewed venues, whether pre or post publication) is a reason to distrust a supposed expert – they are not properly engaging with an epistemic community to hone their work. This is the case whether one has readily assessable expertise or not. Supposed chess masters who do not engage with and beat other chess masters are not expert players after all. Supposed medical experts whose patients fare worse than those of other medical experts should have their expertise doubted. It is only against the backdrop of a functioning expert community that we can assess experts as having expertise. Being part of a well-functioning expert community is central to assessing whether an expert is trustworthy. The more diverse and interactive (e.g. responsive to criticism) the expert community with which an expert works, the more trustworthy the expert.

This means that sharing aspects of how the expert community works with the broader public is an important way to bolster trust in that community. Expert communities should reveal when there is important debate and show the contours of the debate (what the debate is about and what it is not about). If the debate becomes settled, the public should have some sense of why. The public does not need to know all the details, but it is important for the public to know that the debate has occurred and, in some cases, has been settled. Displaying debate makes clear that the expert community is not some sham of an operation presenting a fake united front. Discussing the presence of disagreement and describing how disagreement has been resolved or why it is ongoing is thus crucial for the public to trust that experts are doing their epistemic work properly, that expertise is being properly honed, and that an appropriate range of issues is being addressed. And the more diverse the expert community is, particularly for expertise that is not readily assessable in terms of raw success, the more confidence the public can have that central concerns have been raised and properly debated.

The third crucial basis for trust in science is that the science was guided by (but not determined by) shared social and ethical values. The importance of social and ethical values for properly done, properly responsible and trustworthy science should be clear from the first part of the essay. Scientific research that is not framed by appropriate values will not look into crucial phenomena or important causal factors. Scientific methods that are not guided by social and ethical values do not produce knowledge that assesses the societally important aspects of a problem and can be deeply harmful to members of the public and to public trust. And scientific inference that does not weigh evidential sufficiency with apt values does not make trustworthy claims. The importance of values in scientific practice means that having shared values guide those judgements in practice, and then communicating that those values guided the judgements (even sketching how), serves as a reason for members of the public (particularly those who embrace the crucial values) to trust the science. The public should trust scientific experts who make judgements as the public would, if they had the expertise.

It is of course crucial that values do not determine what the science produces, as noted in the first section. For social and ethical values to pre-determine scientific results would completely undermine the trustworthiness of the science. The value of inquiry, of genuinely trying to find out what the accurate picture of the world is, should not be subverted. This is partly why having a scientific expert engage with a diverse scientific community, encompassing a wide range of perspectives, experiences, and values, remains a crucial basis for trust. Such a diverse community will call out a researcher for holding to views dogmatically (for value-based reasons) or having inappropriately high or low standards for evidential sufficiency. Value disagreements can be a key source of expert disagreement, as different experts assess the sufficiency of evidence differently, for value-based reasons (Douglas 2000). This disagreement is crucial to generating robust debate, and for detecting inappropriately functioning values in science. Experts engaged in such debate and both generating and responding properly to criticism can be assessed on whether they are utilizing values properly. In short, shared values playing a proper (and not an improper) role in science is one of the bases for trust in science.

The fact that values (as well as methodological preferences and theoretical views) can drive scientific expert disagreement is both a reason to ensure appropriate diversity in science and a reason why consensus, when formed, is particularly trustworthy. Much discussion of trust in science has centred on consensus (Oreskes 2004, 2019; Anderson 2011) but, as many of these discussions note, the mere presence of consensus is insufficient for trust in science. Consensus must be (1) properly formed (through investigation, debate, the pursuit and rejection of alternative accounts, and resulting genuine agreement) and (2) formed among an appropriately diverse set of experts in order to count as trustworthy. Both conditions must be met and, in these cases, trust rests on the conditions already described above: trust arises from (1) the presence of expertise, (2) sound social epistemic practices in the scientific community, and (3) the fact that some of the experts involved in the debate have values that reflect one’s own (an appropriately diverse set of experts will usually include some who share a given member of the public’s values). Trustworthy consensus thus derives its trustworthiness from the bases discussed above. When a scientific community has members who hold values similar to one’s own and that community has reached a consensus on a particular issue, one’s values are encompassed by the consensus.

It is also important, given that we do not want to artificially push for consensus in science, that we not make the presence of consensus a precondition for trustworthy scientific expertise. The public needs to make judgements of trustworthiness in the absence of consensus, as well as in its presence. When consensus is present, the bases described above provide the key guidance on whether the consensus was properly formed and thus trustworthy. Was it formed among genuine experts, functioning well epistemically (debating issues properly), among a diverse set of experts (so that one’s values were included)? When consensus is absent, members of the public should find genuine experts, engaged in a well-functioning and diverse epistemic community, who share their values and trust the views of those experts, even if consensus has not yet been reached.

Consensus is thus not essential for trust. What is essential is the presence of expertise, the presence of a well-functioning community of experts (including adequate diversity and good social epistemic practices) and the presence of shared values. While this is more complex to assess than the presence or absence of consensus, the mere presence or absence of consensus is not sufficient for trust on its own (Anderson 2011; Oreskes 2019). Only a properly formed consensus among a sufficiently diverse set of experts is trustworthy. In either case, the functioning of the community of experts must be evaluated by the non-expert.

In practice, this means that expert communities should be open about their debates and disagreements. That they are having such disagreements might be frustrating to those who want a clear answer now, but such disagreements are a key signal of trustworthiness. Artificial agreement or a coerced consensus is a reason for non-experts to distrust an expert community. Scientific experts should also be open with each other and the broader public about the value commitments that informed key judgements in the research process – about the values that framed the research project, that shaped the methodological choices, that informed the assessment of evidence, and that guided any other key judgements. Such values are a bridge to trust for the public (Hicks and Lobato 2022). If scientists are open about them, the public can more readily find experts who share their values and whom they can trust.

Conclusion

In sum, a facade of cool detachment and aloofness from the concerns of the broader society does not help generate trust from the public. Passionate, engaged and rigorous expertise is far more likely to produce trusting relationships. Sharing your values will help the non-expert public identify you as an expert whom they can trust – as an expert who would make judgements as they would, if they had your expertise. Not all members of the public will share your values, including the values that guide your scientific work. But for those members of the public who do not, your work may not be the most trustworthy – you may not be including key issues of concern to them, or they might weigh the sufficiency of evidence differently were they in your shoes. Other experts may well be more trustworthy for them. This is not an insult to your scientific integrity, but rather a reasonable inference given the importance of social and ethical values for science. Even for members of the public who do not share your values, your presence in the expert community is invaluable, because you help provide the diversity of perspectives and richness of critique that makes the work of that community trustworthy generally.

So explain to the public why you make the judgements you make (although not all the details – that would overwhelm them). Display key aspects of expert debate and disagreement. Be open about the values informing your work. Be human. That will make your work trustworthy and ultimately trusted.

Further Reading

The most recent overviews on values in science are Douglas (2021) and Elliott (2022). Oreskes (2019) provides a good introduction to the discussions of trust in science. Douglas (2021) also discusses science communication.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Heather Douglas

Heather Douglas is professor of philosophy at Michigan State University. Her research focuses on the relationship between science and democracy, including the role of social and ethical values in science, the nature of scientists’ responsibility in and for science, and science-policy interfaces such as science advising, science funding, responsible research oversight/cultivation and science communication. She is the author of dozens of articles and essays, several edited collections and monographs. In 2016, she was elected as a Fellow of the American Association for the Advancement of Science.

Notes

1 For more details, see Douglas (2016, 2021) and Elliott (2017, 2022).

References

  • Anderson, Elizabeth. 2011. “Democracy, Public Policy, and Lay Assessments of Scientific Testimony.” Episteme 8 (2): 144–164. doi:10.3366/epi.2011.0013.
  • Beecher, Henry K. 1966. “Ethics and Clinical Research.” The New England Journal of Medicine 274 (24): 1354–1360. doi:10.1056/NEJM196606162742405.
  • Benjamin, Daniel J., James O. Berger, Magnus Johannesson, Brian A. Nosek, E.-J. Wagenmakers, Richard Berk, Kenneth A. Bollen, et al. 2018. “Redefine Statistical Significance.” Nature Human Behaviour 2: 6–10. doi:10.1038/s41562-017-0189-z.
  • Brandt, Allan M. 1978. “Racism and Research: The Case of the Tuskegee Syphilis Study.” The Hastings Center Report 8 (6): 21–29. doi:10.2307/3561468.
  • Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4): 559–579. doi:10.1086/392855.
  • Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
  • Douglas, Heather. 2013. “The Value of Cognitive Values.” Philosophy of Science 80 (5): 796–806. doi:10.1086/673716.
  • Douglas, Heather. 2016. “Values in Science.” In The Oxford Handbook of Philosophy of Science, edited by Paul Humphreys, 609–630. New York: Oxford University Press.
  • Douglas, Heather. 2017. “Why Inductive Risk Requires Values in Science.” In Current Controversies in Values and Science, edited by Kevin Elliott, and Daniel Steel, 81–93. New York: Routledge.
  • Douglas, Heather. 2021. The Rightful Place of Science: Science, Values, and Democracy: The 2016 Descartes Lectures. Tempe, AZ: Consortium for Science, Policy & Outcomes.
  • Elliott, Kevin. 2017. A Tapestry of Values: An Introduction to Values in Science. New York: Oxford University Press.
  • Elliott, Kevin. 2022. Values in Science. Cambridge: Cambridge University Press.
  • Hicks, Daniel J., and Emilio J. C. Lobato. 2022. “Values Disclosures and Trust in Science: A Replication Study.” Frontiers in Communication 7: 1017362. doi:10.3389/fcomm.2022.1017362.
  • Kitcher, Philip. 2001. Science, Truth, and Democracy. New York: Oxford University Press.
  • Kitcher, Philip. 2004. “Responsible Biology.” BioScience 54 (4): 331–336. doi:10.1641/0006-3568(2004)054[0331:RB]2.0.CO;2.
  • Kuhn, Thomas S. 1977. “Objectivity, Value Judgment, and Theory Choice.” In The Essential Tension, 320–339. Chicago, IL: University of Chicago Press.
  • Laudan, Larry. 2004. “The Epistemic, the Cognitive, and the Social.” In Science, Values, and Objectivity, edited by Peter Machamer, and Gereon Wolters, 14–23. Pittsburgh and Konstanz: University of Pittsburgh / Universitätsverlag Konstanz.
  • Levi, Isaac. 1960. “Must the Scientist Make Value Judgements?” The Journal of Philosophy 57: 345–357. doi:10.2307/2023504.
  • Levi, Isaac. 1962. “On the Seriousness of Mistakes.” Philosophy of Science 29 (1): 47–65. doi:10.1086/287841.
  • Longino, Helen. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.
  • Longino, Helen. 2002. The Fate of Knowledge. Princeton: Princeton University Press.
  • McMullin, Ernan. 1983. “Values in Science.” In PSA 1982: Proceedings of the Biennial Meeting of the Philosophy of Science Association, vol. 1, edited by P. D. Asquith and Thomas Nickles, 3–28. East Lansing, MI: Philosophy of Science Association.
  • Mosby, Ian. 2013. “Administering Colonial Science: Nutrition Research and Human Biomedical Experimentation in Aboriginal Communities and Residential Schools, 1942–1952.” Histoire Sociale/Social History 46 (1): 145–172. doi:10.1353/his.2013.0015.
  • Oreskes, Naomi. 2004. “The Scientific Consensus on Climate Change.” Science 306 (5702): 1686–1686. doi:10.1126/science.1103618.
  • Oreskes, Naomi. 2019. Why Trust Science? Princeton: Princeton University Press.
  • Oreskes, Naomi, and Erik Conway. 2010. Merchants of Doubt. New York: Bloomsbury Press.
  • Reverby, Susan M. 2011. “‘Normal Exposure’ and Inoculation Syphilis: A PHS ‘Tuskegee’ Doctor in Guatemala, 1946–1948.” Journal of Policy History 23 (1): 6–28. doi:10.1017/S0898030610000291.
  • Steel, Daniel. 2010. “Epistemic Values and the Argument from Inductive Risk.” Philosophy of Science 77: 14–34. doi:10.1086/650206.
  • Weindling, Paul J. 2004. Nazi Medicine and the Nuremberg Trials: From Medical War Crimes to Informed Consent. Basingstoke: Palgrave Macmillan.
  • Wilholt, Torsten. 2009. “Bias and Values in Scientific Research.” Studies in History and Philosophy of Science Part A 40 (1): 92–101. doi:10.1016/j.shpsa.2008.12.005.