Perceptions of Science: Public Trust

Science communication and public trust in science

Kristen Intemann

ABSTRACT

There are many ways that trust plays a crucial role in science, both among researchers and between researchers and the various communities impacted by their research. Scientific practices can operate in ways that either facilitate or undermine trust in science. This contribution examines the role of science communication in facilitating (or undermining) public trust in science and science-based policy recommendations. It does so by looking at some potential failures in the public communication of science during the COVID-19 pandemic that have the potential to undermine trust in scientists. Finally, I draw out lessons from this case for how we might improve science communication practices.

1. Introduction

Trust is important in multiple ways to interactions between science and societies. Policymakers and government officials must trust researchers to provide reliable data that can guide decisions about how to address emerging public threats. Laypersons must often rely on experts to produce and share reliable knowledge that can guide decisions about which actions to take, what treatments or products to use, or which science-related policies to support. Scientists must be able to trust each other when their own work relies on the work of others. Scientific knowledge is becoming increasingly complex and specialized, with greater reliance on intricate technological devices and cognitive tools such as models (de Melo-Martín and Intemann Citation2018). This makes it increasingly difficult for those who are not experts in a particular field – even those with some scientific training – to fully understand the state of knowledge within some area of science. Thus, laypersons, policymakers, and even other scientists must trust scientific experts to provide information and guidance that can help rationally guide their decisions.

Although trust in experts is often necessary, trust is risky and can make us vulnerable (Baier Citation1986). Ideally, we want to promote trust that is well-placed, or warranted. That is, we want to ensure that laypersons have good reasons for trusting scientists or scientific institutions. While there is considerable debate about what exactly trust involves, there is wide recognition that it has both epistemological and ethical dimensions (Baier Citation1986; Hardwig Citation1991; Mayer, Davis, and Schoorman Citation1995; Hardin Citation2006; Scheman Citation2011). Trusting particular scientists requires believing that they are competent and able with respect to a domain of expertise (Mayer, Davis, and Schoorman Citation1995; Hendriks, Kienhues, and Bromme Citation2015) or epistemically reliable (Wilholt Citation2013). It also requires believing that scientists are honest (Baier Citation1986), have integrity (Mayer, Davis, and Schoorman Citation1995), or adhere to scientific standards (Hendriks, Kienhues, and Bromme Citation2015). Finally, it requires believing that scientists have good intentions (Mayer, Davis, and Schoorman Citation1995) or care about the wellbeing of the publics they serve (de Melo-Martín and Intemann Citation2018; Goldenberg Citation2021), including having the right attitude about the possible epistemic consequences of their work (Wilholt Citation2013). In the previous contribution, Goldenberg summarizes these criteria in the following way:

[t]he public will trust advice and information from scientific experts if the individual, group, or the institutions the scientists represent are perceived as (i) epistemically competent, i.e. they are in a position to know, (ii) morally reliable, i.e. they are disposed to tell the truth, and (iii) they work in the public interest. (Goldenberg, this special issue)

Scientific practices play a significant role in mediating the public’s perception of each of these criteria. For example, scientific practices or regulations that protect the epistemic integrity of science can influence public confidence about whether that research is epistemically competent. Scientific organizations and institutions can adopt policies that promote (or fail to promote) honest and independent research directed towards the interests of the public (as opposed to the interests of individual researchers, institutions, or corporations). Scientific funding agencies might also adopt practices that give weight to research in the public interest or research that addresses some of the most pressing problems societies face, so as to facilitate the third criterion for trust. This contribution examines the role of science communication in facilitating (or undermining) public trust in science and science-based policy recommendations. I begin by considering the nature and goals of science communication, in order to show how these goals are interconnected with public trust in scientists and scientific institutions. Using examples from the COVID-19 pandemic, I identify various communication errors that are likely to erode trust even though they may promote other goals of science communication. While public opinion polling in some countries suggests that overall trust in science is quite high and in some cases may have increased during the pandemic (e.g. Bromme et al. Citation2022), there were still communication errors that have the potential to erode warranted trust in science. Understanding how this happened is important for avoiding these problems in the future and facilitating, rather than undermining, warranted trust. I conclude with some lessons on how we might improve science communication practices.

2. The nature and goals of science communication

Science communication, broadly construed, involves many different communicators engaging with many different audiences. While individual scientists may serve as science communicators, scientific organizations, academic institutions, museums, research groups, healthcare professionals, and science journalists, among others, may also engage in science communication. Communication can be directed towards scientists inside or outside a research area, others within an institution, funding agencies, regulatory agencies, policymakers, the public at large, or particular segments of the public (e.g. those most at risk, those in a particular geographical area, or those who share a particular culture, language, or set of values). Because science communication involves multiple types of both communicators and audiences, the goals of science communication may depend on the context (Intemann Citation2020).

Broadly speaking, one set of goals concerns empowering decision-makers (individual laypersons, policymakers, funding agencies, or even other scientists) to make well-grounded decisions with respect to science and technology (Priest Citation2013). In this context, providing accurate information is an important aim. Accurate information is necessary for grounding decisions about what research to fund and pursue, whether certain interventions are safe and effective, what personal choices and actions one might take (e.g. with respect to one’s health), and what policies might be needed. Having accurate information increases autonomy in decision-making and provides a basis for making informed, evidence-based, decisions. Communicators who do not provide accurate information are less likely to be judged epistemically competent and, therefore, less likely to be found trustworthy.

Yet, for many audiences, accurate or reliable information is not all that is needed. There are several ways in which ‘accurate’ information can nonetheless fail to result in well-grounded decision-making. When communicating with laypersons, a further goal is to present information in a manner that is widely accessible and understandable. Focusing on accuracy may lead science communicators to use precise, subtle, and overly technical language that fails to convey the significance of what is being communicated in a way that could rationally guide actions (McKaughan and Elliott Citation2013; McKaughan and Elliott Citation2018, 206–207). Being presented with, for example, a mass of technical genetic information may not help patients understand what their risks are, how this information might impact aspects of their lives that they care about, or what they ought to do. Thus, understanding can also be an important goal.

In addition, decision-makers are not just concerned about what we know so far, or the state of a particular field of science or technological innovation. They also want to be able to make predictions about what will happen in the future (given what we know so far). Investors and funding agencies need to predict the most promising areas or research strategies to which to devote limited research funding. Policymakers need to predict or anticipate what regulations might be required to minimize risks, maximize benefits, or ensure that the benefits of a science or technological product are fairly distributed. Community members need to be able to predict their individual risks in order to make decisions about their own behaviours or treatments. Thus, some scientific communication, by its very nature, is aimed at enabling reliable predictions of various sorts. While this obviously requires being given accurate and accessible information, it also requires information that is relevant to decision-making (Rowan Citation1991; Weingart, Engels, and Pansegrau Citation2000). Not all ‘accurate’ information provides the sort of relevant information that enables reliable predictions. For instance, during the COVID-19 pandemic many decision-makers needed data to evaluate the risks of developing severe COVID that would require hospitalization. As surges in cases were predicted, policymakers and healthcare administrators needed to predict their needs and capacities. Individuals needed to assess their own risks of being hospitalized if infected. Yet, in the US, for example, the data reported on hospitalizations tracked the number of those hospitalized who had tested positive (or in some cases were presumed positive) for SARS-CoV-2 (Galaitsi et al. Citation2021). The data did not distinguish between patients who were hospitalized because they had severe COVID and those who were in the hospital for some other reason (and then also tested positive). Even though the data might have accurately captured how many people in the hospital had COVID-19, the lack of relevant information (data on how many patients were hospitalized due to severe COVID) made it harder to assess risks. Thus, a third goal of science communication is to present decision-makers with information that is relevant to the kinds of decisions they must make.

In some contexts, the goals of communication are not just to inform or empower decision-makers, but to influence them. Protecting public health often requires motivating action. The goal is to convince individuals to engage in certain behaviours or to urge policymakers to enact certain policies. This is particularly the case in communication about infectious diseases, such as COVID-19, HIV, HPV, or other communicable diseases. In these contexts, the behaviours that individuals adopt can impact the health of others or public health in general. Thus, motivating individual action can be vital for protecting public health at the community level. Motivating regulatory action can also be crucial for protecting communities against toxins or substances that may have serious health consequences, particularly those in widespread use, such as lead in paint or bisphenol A in plastics.

In some cases, motivating action may not be difficult. For example, providing people with information that regular, proper handwashing can reduce their risk of becoming infected with COVID-19 (and other bacterial and viral infections) may be fairly motivating, because handwashing does not require much sacrifice or inconvenience and the multiple benefits are obvious. When the action being motivated imposes some burden or personal sacrifice, however, science communicators may have to provide more explanation of why this action is important or beneficial or ought to be done.

In addition, some science communication aims to generate interest in and enthusiasm about science, a particular area of research, a new treatment, or an emerging technology. The development of science or technology requires that new researchers be drawn to a particular field and invest their time. It also requires resources for carrying out the research. Attracting both talent and resources thus necessitates generating excitement and enthusiasm about particular areas of science or technology (Master and Resnik Citation2013, 2; Master and Ozdemir Citation2008; Schrage Citation2004). Generating interest in science is also important to increase and maintain scientific literacy within communities. We want a populace that is interested in being scientifically informed, not only so they can make responsible decisions themselves, but also so that they understand the options facing policymakers and why some may be more justified than others. Generating enthusiasm and provoking interest in science helps engage laypersons to be scientifically informed and involved.

Finally, in order for science communication to be effective in its other goals (such as empowering decision-makers, motivating action, or generating enthusiasm), it must simultaneously promote another goal: facilitating trust. Decision-makers must conclude that the information, and those who generate it, are trustworthy (Cunningham-Burley Citation2006). Moreover, as we have seen, communicating trustworthiness will require convincing decision-makers not only that the communicators (and those they represent) are epistemically competent, but also that they are morally reliable and serve the public interest. Thus, in many contexts, a goal of science communication is to facilitate trust between scientists and various publics, which may require being attentive to those aspects of science that bear on epistemic competency, moral reliability, and the public interest. For example, in addition to communicating new knowledge or information, facilitating trust can require communicating about the ways in which that knowledge was produced and why those methods are reliable. It may also require transparency about the limitations and uncertainties at stake, demonstrating consistency, sharing data, or ensuring that scientific results are communicated broadly to diverse audiences (Nisbet and Scheufele Citation2009; Weingart and Guenther Citation2016). To facilitate trust, communicators must engage with their audiences in ways that show they care about their interests, welfare, and values. In short, facilitating trust requires communicating not just scientific knowledge, but also how science works and why the science (or scientist) in a particular instance meets standards of competency, reliability, and the public interest.

Thus, there are several goals of science communication, some of which may be more or less important depending on audience and context. To summarize, these are: (1) accuracy, (2) understanding, (3) predictive relevancy, (4) motivating action, (5) generating interest, and (6) facilitating trust. While this list does not exhaust the possible goals of science communication, it demonstrates that those goals are multiple and interrelated. Conveying accurate information may also be important to predictive relevancy and facilitating trust. But conveying accurate information may not be sufficient for achieving the other goals (McKaughan and Elliott Citation2018). Indeed, the goals of science communication can conflict. Accuracy may be at odds with understandability and may not be sufficient to generate excitement. Generating excitement can be contrary to facilitating trust when communication is sensationalized and, in some cases, may be in tension with accuracy. Even when it is important to generate excitement and interest in research and new technologies, there are recognized dangers when no tangible results materialize that align with public expectations (Mason and Manzotti Citation2009). There may also be tensions between motivating action and facilitating trust in cases where science communicators are perceived to have an agenda for action that does not align with what some people believe is in the public interest. In the next section, we will examine several potential errors in science communication during the COVID-19 pandemic that illustrate these tensions in practice, focusing on those that tend to have the effect of undermining trust.

3. Errors of communication that undermine trust

On 11 March 2020, the World Health Organization formally declared the novel coronavirus (SARS-CoV-2) crisis a pandemic. The pandemic posed a variety of challenges for both scientists and policymakers. Because it was a novel coronavirus, there were significant knowledge gaps and uncertainties about how the virus was transmitted, how contagious it was, how fatal it was, who was most at risk, and what behaviours, actions, or policies might be most effective in stopping or slowing the spread. This necessitated a great deal of coordinated investigation, which takes time. Yet because the virus also appeared to transmit rapidly through communities around the globe, overwhelming healthcare systems and resulting in significant fatalities, urgent decisions and actions were also required. The COVID-19 pandemic thus presented unique challenges for science communicators as well: to communicate recommendations given the current base of knowledge and evidence, while also recognizing that the evidence and recommendations might change. While it is no surprise that mistakes were made given the unprecedented and complex challenges for scientists, public health experts, and science communicators, identifying and examining failures of communication during the pandemic can help us develop research and communication strategies to avoid these problems and better facilitate trust in the future.

3.1. Mixed messaging

In some countries, members of the public reported mixed messages from public health officials. For example, survey research conducted early in the pandemic showed that nearly three-quarters of the public in the US believed that they had been given conflicting information regarding the COVID-19 pandemic (Nagler et al. Citation2020). While conflicting messages were more often attributed to politicians, they were also frequently attributed to public health experts. Indeed, mixed messaging was found to be a significant problem not only in the US, but also throughout Europe (Ruiu Citation2020; Warren and Lofstedt Citation2021) and Latin America (Burki Citation2020).

One example of mixed messaging occurred in communication about the efficacy of wearing masks and face coverings early in the pandemic (Noar and Austin Citation2020). In February 2020, the World Health Organization insisted that masks were not likely to prevent infection and that only sick people should use them, even though there was some evidence for the effectiveness of masks (Peeples Citation2020). At the same time, mask-wearing was common in Asian countries. Some countries reversed course on masking recommendations by the beginning of April 2020. For example, the Centers for Disease Control and Prevention (CDC) in the US tweeted a recommendation to wear ‘cloth face coverings to slow the spread of COVID 19 in public settings where other social distancing measures are difficult to maintain (e.g. grocery stores, pharmacies, etc.) especially in areas of significant community transmission.’ It took the World Health Organization many months to officially acknowledge asymptomatic transmission and that the virus could be transmitted by aerosols (not just droplets produced by coughing and sneezing), such that masks were an important tool (Lewis Citation2022).

As a result, many members of the public were confused about whether cloth face masks were effective, and skepticism about the efficacy of masks persisted even once stronger evidence emerged that cloth masks offered some protection (Noar and Austin Citation2020). These apparently conflicting messages left many thinking that scientists do not really know whether masks are effective (i.e. that they are not epistemically competent) or that they are giving advice on the basis of political considerations rather than evidence (i.e. that they are not morally reliable).

Of course, the challenges that science communicators were facing in those moments were complicated. The reluctance to recommend mask-wearing early in the pandemic was driven by both practical and epistemic considerations. First, there were widespread shortages of medical-grade masks (such as N-95 masks) for which there was significant evidence of efficacy (given their known filtration capacities, their ability to fit tightly over a person’s nose and mouth, and their efficacy against similar virus particles). Thus, science communicators at that moment were focused on motivating action with the intention of protecting public health. That is, they needed to motivate people to refrain from buying medical-grade masks, because such masks were already in short supply among healthcare workers who were most at risk of being exposed to high concentrations of the virus and who needed to stay healthy enough to continue to care for others. However, this not-so-hidden motive may have led them to communicate in ways that undermined their moral reliability, because people were told that masks would not protect them, which was not accurate. Second, there was not significant evidence about the effectiveness of cloth masks in February of 2020 (though it was known they were likely not to be as effective as medical-grade masks). When the messaging changed in April 2020 and beyond, it was in light of new evidence (evidence which continued to grow and provide greater understanding, leading to evolving recommendations about mask-wearing).

The honest message would have been that we were not confident about the level of protection cloth masks offered, but that they were likely better than nothing. This message would have also laid the groundwork for future recommendations for the public to ‘upgrade’ the quality of their masks once supplies of N-95 masks or their equivalents were more plentiful (a message that some still have not grasped as of the writing of this contribution). This message would have done more to facilitate trust in the long run, although it is not clear whether it would have been sufficient to motivate people to stop buying N-95 masks during a shortage. Still, there were other ways to constrain public behaviour in light of the truth, such as regulating the sales of medical-grade masks (which also happened). The problem with mixed messaging (or even the perception of mixed messaging) is that it damages trust in scientists and public health officials beyond the moment and scope of the information being given.

3.2. Focus on results rather than how science works

When confronted with apparently contradictory messages about masking, or with conflicting studies about whether a particular treatment was effective against COVID-19, some laypersons concluded that scientists simply do not know. This is in part because there are often false assumptions about how science works that can impact trust (John Citation2018). For example, when people believe (wrongly) that science is about proving theories true or false with certainty, they can think that a study that seems to contradict a widely held finding implies that scientists ‘just don’t know.’ Or laypersons might believe, for example, that ‘all peer-reviewed studies are equal,’ without appreciating that some studies should carry more weight in virtue of their sample size or because they are meta-analyses or reviews of several studies. If false assumptions about the nature of science are widespread, this can also generate distrust about particular scientific findings or about the epistemic competency of scientists or their methods (John Citation2018). Thus, it can be a mistake to focus only on the findings of a particular scientific study.

In communicating about the evidence for why even cloth masks offer protection against COVID-19, science communicators often pointed to single scientific studies that supported this view. But, particularly given previous conflicting messages that such masks were not effective, some of the communication needed to be about how and why new evidence was obtained and why the evidence on the whole supports the efficacy of mask-wearing. In communicating with laypersons, there also needs to be communication about how science works – that science is a process of continually generating evidence that can change over time. Our best supported theories can also change over time in response to new or more evidence. Thus, part of the responsibility of science communicators is to help laypersons understand how science works and which features of science or the current evidence make it reliable (albeit fallible).

3.3. Use of technical jargon

Another potential failure during the COVID-19 pandemic was the use of technical jargon that often went unexplained. For example, when regulatory agencies announced approvals for COVID-19 vaccines, they often used technical terms that confused some members of the public. The first vaccines were granted ‘conditional marketing authorization’ in the European Union and ‘Emergency Use Authorization’ in the US. These approval mechanisms, often used in responding to novel threats, provide accelerated approval of drugs, devices, or diagnostic procedures with less comprehensive data than is normally required. Known or potential benefits must outweigh known or potential risks to receive such approvals. While scientists and science journalists often did report on what had been given conditional marketing or emergency use authorization, it is not clear that audiences understood what these terms meant (Quinn, Jamison, and Freimuth Citation2020). During the Influenza A (H1N1) pandemic, researchers found that patients had a poor understanding of such terminology, associating it with ‘experimental’ and ‘accelerated approval,’ terms to which they had negative reactions (Quinn, Jamison, and Freimuth Citation2020, 355–356). Consequently, the same patients also expressed significant reluctance, or outright refusal, to take a vaccine or therapeutic with ‘only’ a conditional or emergency approval, as opposed to full authorization.

The use of technical jargon can be problematic for many reasons. It can undermine several goals of science communication, including understanding, motivating action, and generating interest. Most importantly for our purposes, it can also undermine trust. When technical jargon is used, it can be interpreted in different ways, some of which may cause laypersons to doubt whether the scientists or scientific institutions are working in the public interest. (After all, is ‘accelerated’ or ‘experimental’ really in the public interest?) In addition, the use of jargon may inadvertently communicate a lack of caring about whether it is understood, or be read as an instance of using technical language to obscure uncomfortable truths (e.g. that we do not have enough evidence yet for a full approval). Simple and accessible information is needed not only to facilitate trust but also to promote understanding and enable decision-making.

3.4. Masking values

Science communication that aims to motivate action or policy is not merely descriptive. That is, it intends to convey what people ought to do and why. Yet motivating action, whether it be advising certain behaviours, recommending a medical intervention, or establishing a policy, requires not just scientific information, but also value judgments (de Melo-Martín and Intemann Citation2018; de Melo-Martín and Intemann Citation2012). It requires value judgments about the state of affairs one ought to bring about and the best way to accomplish it (all things considered). To claim that individuals ought to get a COVID-19 vaccine requires not only showing that a vaccine is safe and effective, but also showing that the time, inconvenience, mild side effects or minimal risks, and so on are worth it. Oftentimes, the value judgment needed to get from information to action is uncontroversial and widely held. For example, basic hygiene such as handwashing has virtually no risks and is beneficial to individuals for a variety of reasons. People were encouraged to wear masks because masks were also assumed to carry no risks while offering some protection against infection.

Yet the values that played a role in decisions about wearing and requiring masks were more complicated than anticipated. Certain populations believe that wearing masks indoors around those outside one’s own household carries significant burdens (He et al. Citation2021; Esmaeilzadeh Citation2022). The most prevalent concern was physical discomfort, particularly in excessive heat or during physical exertion (He et al. Citation2021). Some reported communication difficulties in projecting one’s voice or understanding others, and masks limit our ability to communicate and understand through facial expressions (Esmaeilzadeh Citation2022). This burden is obviously more pronounced for those with disabilities or non-native speakers, who understand more easily when they can see a speaker’s lips moving. At certain points during the pandemic, masks were also difficult to find and/or expensive to buy. These burdens were perceived to be more significant by some, who also tended to believe that the benefits of mask-wearing were questionable. Many did not believe that the risks of getting COVID were that bad (at least for them) (He et al. Citation2021). Even those who did not wish to get sick recognized that wearing a mask would not guarantee the prevention of infection. For many of us, the benefits still outweighed the costs – particularly for those of us who value doing everything we can to help protect communities or those who are vulnerable. But this is a value judgment, and it is one about which there can be disagreement. Individuals may disagree about what their individual responsibilities are for protecting others and how much sacrifice is warranted. These disagreements become even more pronounced when thinking about the enactment of policies, such as mask mandates (He et al. Citation2021). Support for such requirements also depends on certain value judgments about the role of government and the extent to which individual liberty can be restricted for the good of communities or nations. In the US, where there are significant populations that place high value on individual liberty and are wary of ‘government control,’ there is likely to be resistance to supporting mandates.

When science communicators attempt to motivate action by suggesting that public health policies or recommendations are only about ‘following the science,’ this obscures the fact that there are also implicit value judgments at stake. This can potentially undermine trust by calling into question whether the values of scientific institutions or public health officials are genuinely representative. It may lead some to conclude that science communicators have an ‘agenda,’ and one that does not align with the values some communities hold. In other words, individuals or groups may believe that science is not really working towards their interests (even if it is working for the interests of someone or something).

3.5. Failure to be audience-specific

While science communication can require simple and concise messaging (Noar and Austin Citation2020), it is also the case that different audiences may have different interests, needs, and circumstances, and those differences can be important to increasing understanding, motivating action, or facilitating trust. Consider, for example, the simple message that was utilized early in the pandemic by public officials: ‘Stay Home. Stay Safe’ (Noar and Austin Citation2020, 1735). While this was simple and clear, it was advice that not everyone could (or should) follow. Staying at home was not feasible for those not able to work from home, including essential workers or hourly employees who simply could not afford not to work. While some countries had or created policies to support those families, many others did not. In addition, within some communities, ‘staying at home’ meant being in very close quarters with a significant number of people, sometimes sharing a bed. Thus, the simple message to ‘stay at home’ did not offer advice that was likely to make some individuals safer (depending on what the alternatives were). Again, the result is the potential to undermine confidence that scientists or public health officials have the interests of (at least certain) communities in mind. Indeed, when communities are given advice that appears to be impossible to follow or does not speak to their circumstances, it can create a perception that experts ‘don’t know about and don’t care about people like me.’

What could have been more helpful (and what was done in some cases) was to develop more targeted audience-specific recommendations to address the variety of circumstances that certain communities faced. For example, recommendations on how to minimize risks in crowded households, or how to minimize risks in urban versus rural areas, or how to minimize risks when gathering is necessary would have helped speak to the diversity of circumstances in societies.

Differences in the circumstances of audiences are further compounded by differences in values, cultural beliefs, and worldviews, as described in the previous subsection. One of the rationales articulated in urging people to wear face masks was altruistic: to protect others (Cheng, Lam, and Leung Citation2020). While this messaging was effective with those who value community wellbeing, solidarity, or protecting those who are vulnerable, it was not effective with those who give less weight to those values (Druckman et al. Citation2021). Indeed, some communicators ended up describing those who did not wear masks as ‘selfish,’ ‘anti-social,’ or worse. But this can exacerbate the sense that scientists and public officials have different values and are not working in the interest of all.

In these sorts of cases, targeting communication to show how a recommendation can align with a broad array of values can be an effective way to build consensus and motivate action across diverse populations (Goldenberg Citation2021). Instead of casting those who are mask-reluctant as selfish or antisocial, communicators can explain why wearing a mask promotes other values. Mask-wearing, it could be argued, promotes individual freedom by maximizing one’s ability to do more things while minimizing risks. Mask-wearing also helped to keep economies open and allowed restaurants and other indoor spaces to be safer so that they could function and stay staffed. While action requires more than ‘just’ science, there are a variety of values that can help motivate the same actions – we just need to speak to them. Doing so can help facilitate trust – particularly the sense that one’s interests are being considered and protected. It can also avoid undermining trust by vilifying those who may have different values.

3.6. Hype and alarmism

Hype is overly optimistic exaggeration in science communication that can occur explicitly or implicitly (Intemann Citation2020). Consider communication about chloroquine (CQ) and hydroxychloroquine (HCQ) as potential COVID-19 therapeutics. Early in the pandemic, a number of science communicators (including drug regulators, policymakers, politicians, scientists, health officials, professional associations, journal editors, publishers, and clinicians) rushed to communicate the promise of CQ and HCQ as offering protection or treatment for COVID-19 (Singh and Ravinetto Citation2020). Based on a pre-print study from China (Chen et al. Citation2020) and a small French study (Gautret et al. Citation2020) that was published in a matter of days, the World Health Organization and its partners launched the international ‘Solidarity Trial’ to compare four treatment options (CQ/HCQ, remdesivir, lopinavir, and lopinavir with an anti-inflammatory drug) with the standard of care for hospitalized COVID patients (Ghebreyesus Citation2020). A day later the UK launched a similar national effort (Department of Health and Social Care Citation2020). At the same time that these efforts were being launched, US President Donald Trump, French President Emmanuel Macron, and Brazilian President Jair Bolsonaro, among others, touted the promise of the malaria drug hydroxychloroquine in both preventing COVID-19 infections and treating the disease (so as to avoid severe illness or death) (e.g. Ledford Citation2020; Warraich Citation2020). Yet both early studies involved fewer than 40 patients, and neither was a randomized controlled clinical trial. The Chinese study had not even been peer-reviewed. Neither study was designed to determine whether the drug was safe or effective in preventing infection. Despite this, two days later, the US CDC published guidance with dosing information on HCQ and CQ that was guided by unattributed anecdotes rather than peer-reviewed science (Taylor and Roston Citation2020).

Presumably, those who touted the potential promise of CQ and HCQ were trying to reassure members of the public in the face of a novel public health threat that had no known treatment. Moreover, they may have thought that hopeful enthusiasm about the treatment was warranted, because CQ and HCQ were drugs that had already been approved for treating other illnesses and were already known to have anti-inflammatory effects. Yet the communication around CQ and HCQ also neglected several potential risks and limitations of the evidence for this treatment in relation to COVID. The drug (and particularly the CQ version) was known to have harmful side effects. It was approved for conditions such as malaria and lupus because its proven efficacy in treating those diseases was judged beneficial enough to outweigh any potential side effects. But whether it could have serious side effects in COVID patients, or whether it would provide any benefit in preventing or treating serious complications from COVID-19 so as to offset those risks, was unknown. These risks and uncertainties were not effectively communicated.

The hype surrounding CQ and HCQ had several problematic consequences. Patients who needed these drugs for their approved uses – in treating lupus and rheumatoid arthritis – faced a shortage (Mehta, Salmon, and Ibrahim Citation2020). Patients with COVID began refusing to enrol in clinical trials for other treatments, because they wrongly assumed hydroxychloroquine would be the most effective (Ledford Citation2020). The perception that hydroxychloroquine was safe and effective to treat COVID-19 even led some patients with access to the drug to overdose (Chai et al. Citation2020). Most importantly for our purposes, the ways in which public health institutions and leaders communicated about CQ and HCQ likely undermined public trust, particularly when these treatments failed (Singh and Ravinetto Citation2020). The failure to communicate about uncertainties, limitations, or risks can call into question the moral reliability of science communicators (and the institutions they represent). It also casts doubt on the extent to which health professionals or communicators genuinely have the public interest in mind (or whether they were more concerned with protecting their jobs). Indeed, some research has found that when scientists are open about uncertainties, it can increase their credibility (Hendriks, Janssen, and Jucks Citation2022).

Exaggeration can also be problematic when it is overly pessimistic. Alarmism is an error of communication that occurs when science communicators exaggerate risks and uncertainties in ways that are not fully supported by the evidence we have so far (Intemann Citation2020). Scientists are trained to be skeptical and not to overstate the existing evidence, but this can also translate into overly cautious messaging that can be misinterpreted by the public in ways that have the potential to undermine trust (Dudley et al. Citation2021). The New York Times reported on what it called ‘vaccine alarmism,’ or the trend by public health officials to communicate about the vaccines in an overly cautious or pessimistic way (Leonhardt Citation2021). Consider the following kinds of messages that came out with the approval of the mRNA COVID-19 vaccines:

  • The vaccines are not 100% effective, so you can still contract the virus, even if you are vaccinated.

  • Vaccinated people can still be contagious, so you should continue to avoid large groups and/or wear masks even if everyone is vaccinated.

  • The vaccines are less effective with new variants and future variants may render the vaccines ineffective.

The claims in these messages are, technically, accurate. Moreover, being honest about the limitations of the vaccines might be important to demonstrating moral reliability, which is an important component of trust. But moral reliability requires more than just honesty about limitations. It requires presenting those limitations, risks, or uncertainties in ways that are not misleading. These messages, especially taken by themselves, present an overly pessimistic view of the limitations of the vaccines, in ways that are not likely to motivate people, especially those who may be vaccine hesitant, to get vaccinated. This messaging frames information about the vaccines in ways that make them sound completely ineffectual. Yet the vaccines are highly effective, particularly against hospitalization and death. Being overly pessimistic or alarmist can undermine trust (just like hype) because it may lead people to wonder why experts are recommending vaccination if it will do relatively little to benefit patients. Moreover, it sets the bar for action at the prevention of infection, which may not be a feasible goal and is likely not a goal that reflects the values and interests of many patients. Being attentive to the full range of goals that people care about (not dying, not being hospitalized, not overwhelming hospitals) is important to demonstrate a commitment to public wellbeing.

4. Science communication lessons for facilitating trust

While the examples above are not exhaustive, they illustrate several types of common errors in science communication that can impact public trust. They also illustrate the complex challenges of science communication. The goals of science communication can conflict, so that promoting one may involve tradeoffs with another. Moreover, even communicating in ways that facilitate trust can sometimes conflict with other goals of communication. For example, communicating in ways that are honest or that make transparent the values at stake can facilitate trust, but can also sometimes thwart motivating individuals to act in certain ways (if the truth is messy or the values are not shared).

What constitutes responsible science communication may depend on the context, which aims are most important to that context, and the specific audience the communication is targeting (Medvecky and Leach Citation2019). Responsible communication involves trying to balance competing considerations. Thus, there are no set rules that will always result in responsible or effective communication that facilitates trust for everyone. Nonetheless, the above examples show that there are specific considerations to keep in mind and strategies that will help protect and promote the three dimensions of trust (epistemic competency, moral reliability, and commitment to the public interest). These are:

  1. Have a clear sense of who your audience is and what they care about in relation to decision-making.

  2. Strive to present information that is not only accurate but relevant to the kinds of decisions those audiences want to make.

  3. Develop targeted, audience-specific messaging in cases where there are populations who have different circumstances, needs, and interests.

  4. Explain – as clearly and concisely as possible – why the evidence for the findings being presented is epistemically reliable or competent.

  5. Try to present evidence, findings, or recommendations in ways that acknowledge or speak to a diversity of values or serve a diversity of public interests.

  6. Remind audiences how science works (or set expectations for what might change).

  7. Articulate benefits and risks in a balanced manner.

While the various goals of science communication can conflict or involve tradeoffs, no communication will be effective if either the communicator, or the scientists and institutions that the communicator represents, are found to be untrustworthy. Thus, the other goals of science communication must be balanced in ways that protect public confidence that scientists and scientific institutions are (i) epistemically competent, (ii) morally reliable, and (iii) working in the public interest. More research is needed on exactly how these three dimensions can best be promoted in science communication practices. Some (e.g. John Citation2018; Odenbaugh Citation2008) have argued that transparency and honesty about how science works or the values involved can actually undermine trust in some cases (just as seeing how sausage is made can undermine the public desire to eat sausages). Yet helping various audiences understand how science works and setting reasonable expectations can potentially help address these concerns (Altenmüller, Nuding, and Gollwitzer Citation2021). It is generally better to give people good reasons to think that scientists are epistemically competent and reliable and are working in the public interest, because if those things are not really the case, then public trust in science will rest on a precarious foundation that none of us can afford.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Kristen Intemann

Kristen Intemann is professor in the Department of History and Philosophy and director for the Center for Science, Technology, Ethics & Society at Montana State University. She works on research ethics, objectivity, and bias, feminist philosophy of science, environmental ethics, and biomedical ethics.

References

  • Altenmüller, Marlene S., Stephen Nuding, and Mario Gollwitzer. 2021. “No Harm in Being Self-Correcting: Self-Criticism and Reform Intentions Increase Researchers’ Epistemic Trustworthiness and Credibility in the Eyes of the Public.” Public Understanding of Science 30 (8): 962–976.
  • Baier, Annette. 1986. “Trust and Antitrust.” Ethics 96: 231–260.
  • Bromme, R., N. G. Mede, E. Thomm, B. Kremer, and R. Ziegler. 2022. “An Anchor in Troubled Times: Trust in Science Before and Within the COVID-19 Pandemic.” PLoS ONE 17 (2): e0262823.
  • Burki, Talha. 2020. “Covid-19 in Latin America.” The Lancet Infectious Diseases 20 (5): 547–548.
  • Chai, P. R., E. G. Ferro, J. M. Kirshenbaum, B. D. Hayes, S. E. Culbreth, E. W. Boyer, and T. B. Erickson. 2020. “Intentional Hydroxychloroquine Overdose Treated with High-Dose Diazepam: An Increasing Concern in the COVID-19 Pandemic.” Journal of Medical Toxicology 16: 314–320.
  • Chen, Z., J. Hu, S. Jiang, S. Han, D. Yan, R. Zhuang, and B. Hu. 2020. “Efficacy of Hydroxychloroquine in Patients with COVID-19: Results of a Randomized Clinical Trial.” MedRxiv. https://www.medrxiv.org/content/10.1101/2020.03.22.20040758v3.
  • Cheng, K. K., T. H. Lam, and C. C. Leung. 2020. “Wearing Face Masks in the Community During the COVID-19 Pandemic: Altruism and Solidarity.” The Lancet 399 (10336): e39–e40.
  • Cunningham-Burley, Sarah. 2006. “Public Knowledge and Public Trust.” Public Health Genomics 9 (3): 204–210.
  • de Melo-Martín, Inmaculada, and Kristen Intemann. 2012. “Interpreting Evidence: Why Values Can Matter as Much as Science.” Perspectives in Biology and Medicine 55 (1): 59–70.
  • de Melo-Martín, Inmaculada, and Kristen Intemann. 2018. The Fight Against Doubt: How to Bridge the Gap Between Scientists and the Public. Oxford: Oxford University Press.
  • Department of Health and Social Care, United Kingdom. 2020. “World’s Largest Trial of Potential Coronavirus Treatments Rolled out Across the UK.” https://www.gov.uk/government/news/worlds-largest-trial-of-potential-coronavirus-treatments-rolled-out-across-the-uk.
  • Druckman, J. N., S. Klar, Y. Krupnikov, M. Levendusky, and J. B. Ryan. 2021. “Affective Polarization, Local Contexts and Public Opinion in America.” Nature Human Behaviour 5 (1): 28–38.
  • Dudley, M. Z., R. Bernier, J. Brewer, and D. A. Salmon. 2021. “Walking the Tightrope: Reevaluating Science Communication in the Era of COVID-19 Vaccines.” Vaccine 39 (39): 5453–5455.
  • Esmaeilzadeh, Pouyan. 2022. “Public Concerns and Burdens Associated with Face Mask-Wearing: Lessons Learned from the COVID-19 Pandemic.” Progress in Disaster Science 13: 100215.
  • Galaitsi, S. E., J. C. Cegan, K. Volk, M. Joyner, B. D. Trump, and I. Linkov. 2021. “The Challenges of Data Usage for the United States’ COVID-19 Response.” International Journal of Information Management 59: 102352.
  • Gautret, Philippe, Jean-Christophe Lagier, Philippe Parola, Van Thuan Hoang, Line Meddeb, Morgane Mailhe, Barbara Doudier, et al. 2020. “Hydroxychloroquine and Azithromycin as a Treatment of COVID-19: Results of an Open-label non-Randomized Clinical Trial.” International Journal of Antimicrobial Agents 56 (1): 105949.
  • Ghebreyesus, Tedros Adhanom. 2020. “WHO Director-General’s Opening Remarks at the Media Briefing on COVID-19 – 18 March 2020.” https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19%2D%2D-18-march-2020.
  • Goldenberg, Maya J. 2021. Vaccine Hesitancy: Public Trust, Expertise, and the War on Science. Pittsburgh: University of Pittsburgh Press.
  • Hardin, Russell. 2006. Trust. Cambridge: Polity Press.
  • Hardwig, John. 1991. “The Role of Trust in Knowledge.” Journal of Philosophy 88 (12): 693–708.
  • He, Lu, Changyang He, Tera L Reynolds, Qiushi Bai, Yicong Huang, Chen Li, and Kai Zheng. 2021. “Why Do People Oppose Mask Wearing? A Comprehensive Analysis of US Tweets During the COVID-19 Pandemic.” Journal of the American Medical Informatics Association 28 (7): 1564–1573.
  • Hendriks, Friederike, Dorothe Kienhues, and Rainer Bromme. 2015. “Measuring Laypeople’s Trust in Experts in a Digital Age: The Muenster Epistemic Trustworthiness Inventory (METI).” PLoS ONE 10 (10): e0139309.
  • Hendriks, Friederike, Inse Janssen, and Regina Jucks. 2022. “Balance as Credibility? How Presenting One- vs. Two-Sided Messages Affects Ratings of Scientists’ and Politicians’ Trustworthiness.” Health Communication, 1–8. doi:10.1080/10410236.2022.2111638.
  • Intemann, Kristen. 2020. “Understanding the Problem of ‘Hype’: Exaggeration, Values, and Trust in Science.” Canadian Journal of Philosophy 52 (3): 279–294.
  • John, Stephen. 2018. “Epistemic Trust and the Ethics of Science Communication: Against Transparency, Openness, Sincerity and Honesty.” Social Epistemology 32 (2): 75–87.
  • Ledford, Heidi. 2020. “Chloroquine Hype is Derailing the Search for Coronavirus Treatments.” Nature 580 (7805): 573.
  • Leonhardt, David. 2021. “Vaccine Alarmism.” New York Times, February 19. https://www.nytimes.com/2021/02/19/briefing/ted-cruz-texas-water-iran-nuclear.html.
  • Lewis, Dyani. 2022. “Why the WHO Took Two Years to Say COVID is Airborne.” Nature 604 (7904): 26–31.
  • Mason, Chris, and Elisa Manzotti. 2009. “Induced Pluripotent Stem Cells: An Emerging Technology Platform and the Gartner Hype Cycle.” Regenerative Medicine 4: 329–331.
  • Master, Zubin, and Vural Özdemir. 2008. “Selling Translational Research: Is Science a Value-Neutral Autonomous Enterprise?” The American Journal of Bioethics 8 (3): 52–54.
  • Master, Zubin, and David B Resnik. 2013. “Hype and Public Trust in Science.” Science and Engineering Ethics 19 (2): 321–335.
  • Mayer, Roger C., James H. Davis, and F. David Schoorman. 1995. “An Integrative Model of Organizational Trust.” Academy of Management Review 20 (3): 709–734.
  • McKaughan, Daniel J., and Kevin C Elliott. 2013. “Backtracking and the Ethics of Framing: Lessons from Voles and Vasopressin.” Accountability in Research 20 (3): 206–226.
  • McKaughan, Daniel J., and Kevin Elliott. 2018. “Just the Facts or Expert Opinion? The Backtracking Approach to Socially Responsible Science Communication.” In Ethics and Practice in Science Communication, edited by S. Priest, J. Goodwin, and M. F. Dahlstrom, 197–213. Chicago, IL: University of Chicago Press.
  • Medvecky, Fabien, and Joan Leach. 2019. An Ethics of Science Communication. Cham: Springer.
  • Mehta, Bella, Jane Salmon, and Said Ibrahim. 2020. “Potential Shortages of Hydroxychloroquine for Patients with Lupus During the Coronavirus Disease 2019 Pandemic.” JAMA Health Forum 1 (4): e200438.
  • Nagler, R. H., R. I. Vogel, S. E. Gollust, A. J. Rothman, E. F. Fowler, and M. C. Yzer. 2020. “Public Perceptions of Conflicting Information Surrounding COVID-19: Results from a Nationally Representative Survey of US Adults.” PLoS ONE 15 (10): e0240776.
  • Nisbet, Matthew C., and Dietram A Scheufele. 2009. “What’s Next for Science Communication? Promising Directions and Lingering Distractions.” American Journal of Botany 96 (10): 1767–1778.
  • Noar, Seth M., and Lucinda Austin. 2020. “(Mis)Communicating About COVID-19: Insights from Health and Crisis Communication.” Health Communication 35 (14): 1735–1739.
  • Odenbaugh, Jay. 2008. “Ecology and the Inescapability of Values.” Science and Engineering Ethics 14 (4): 593–596.
  • Peeples, Lynne. 2020. “What the Data Say about Wearing Face Masks.” Nature 586 (7828): 186–189.
  • Priest, Susanna. 2013. “Can Strategic and Democratic Goals Coexist in Communicating Science? Nanotechnology as a Case Study in the Ethics of Science Communication and the Need for ‘Critical’ Science Literacy.” In Ethical Issues in Science Communication: A Theory-based Approach, edited by J. Goodwin, M. F. Dahlstrom, and S. Priest, 229–243. Charleston, SC: CreateSpace.
  • Quinn, Sandra C., Amelia M. Jamison, and Vicki Freimuth. 2020. “Communicating Effectively about Emergency Use Authorization and Vaccines in the COVID-19 Pandemic.” American Journal of Public Health 111 (3): 355–358.
  • Rowan, Katherine E. 1991. “When Simple Language Fails: Presenting Difficult Science to the Public.” Journal of Technical Writing and Communication 21 (4): 369–382.
  • Ruiu, Maria L. 2020. “Mismanagement of Covid-19: Lessons Learned from Italy.” Journal of Risk Research 23 (7–8): 1007–1020.
  • Scheman, Naomi. 2011. Shifting Ground: Knowledge and Reality, Transgression and Trustworthiness. Oxford: Oxford University Press.
  • Schrage, Michael. 2004. “Great Expectations.” MIT Technology Review 107 (8): 2.
  • Singh, Jerome A., and Rafaella Ravinetto. 2020. “COVID-19 Therapeutics: How to Show Confusion and Break Public Trust During International Public Health Emergencies.” Journal of Pharmaceutical Policy and Practice 13 (1): 1–7.
  • Taylor, Marisa, and Aram Roston. 2020. “Pressed by Trump, U.S. Pushed Unproven Coronavirus Treatment Guidance.” https://news.trust.org/item/20200404170001-la0q3/.
  • Warren, George W., and Ragnar Lofstedt. 2021. “Risk Communication and COVID-19 in Europe: Lessons for Future Public Health Crises.” Journal of Risk Research 25 (10): 1–15.
  • Warraich, Haider J. 2020. “The Risks of Trump’s Hydroxychloroquine Hype.” New York Times, May 19. https://www.nytimes.com/2020/05/19/opinion/trump-hydroxychloroquine-coronavirus.html.
  • Weingart, Peter, Anita Engels, and Petra Pansegrau. 2000. “Risks of Communication: Discourses on Climate Change in Science, Politics, and the Mass Media.” Public Understanding of Science 9 (3): 261–284.
  • Weingart, Peter, and Lars Guenther. 2016. “Science Communication and the Issue of Trust.” Journal of Science Communication 15 (5): 1–11.
  • Wilholt, Torsten. 2013. “Epistemic Trust in Science.” British Journal for the Philosophy of Science 64 (2): 233–253.