
Science Advice in an Environment of Trust: Trusted, but Not Trustworthy?


ABSTRACT

This paper examines the conditions of trustworthy science advice mechanisms, in which scientists have a mandated role to inform public policymaking. Based on the literature on epistemic trust and public trust in science, we argue that possession of relevant expertise, justified moral and political considerations, and proper institutional design are conditions for trustworthy science advice. In order to assess these conditions further, we explore the case of temporary advisory committees in Norway. These committees exemplify a de facto trusted and seemingly well-functioning science advice mechanism. Still, this mechanism turns out to realize some central conditions of trustworthy science advice poorly. From this we draw three lessons. Firstly, it remains crucial to distinguish between well-placed and de facto trust. Secondly, some conditions of trustworthy science advice seem more significant than others, and there are thresholds for realizing each condition. Thirdly, not only do the institutional design and organization of science advice matter more than is often recognized; the trust and trustworthiness of the broader social and political context and institutional environment make a difference as well.

Introduction

It is often lamented that there is a decline in public trust in scientific expertise. Well-known cases are climate change denial (Dunlap and McCright 2011), vaccine hesitancy (Goldenberg 2021), and the populist bashing of experts. Given the role and promise of science advice in policymaking, a lack of public trust can have tangible and harmful effects by undermining the prospects of well-informed policies. However, public trust in scientific expertise may not always be warranted. Trust is valuable only when it is directed towards agents and institutions that are trustworthy (O’Neill 2018, 293). Surely, trusting the wrong experts could be damaging, and even disastrous, for example during a deadly pandemic or a social and economic crisis. It is often difficult for citizens and even experienced policymakers to know whether experts are trustworthy, especially in policy areas where knowledge is technical, complex, and uncertain (O’Neill 2018, 295–296). There is often disagreement on which experts to trust in concrete policy controversies, and, more fundamentally, on what basis experts should be trusted. More precise assessments of when fading trust in science is harmful, and when it is not (or even a good thing), require a clear idea of what it means to be trustworthy (Hardin 2006, 1). This paper examines the conditions of trustworthy science advice mechanisms, in which scientists have a mandated role to inform public policymaking.

The issue of trust in science and expertise has lately become a more central topic in the philosophical literature, and recent contributions have provided insight into some of the conditions for well-placed trust in science (Almassi 2012; Irzik and Kurtulmus 2019; Oreskes 2019; Rolin 2020, 2021; Schroeder 2020). Yet, there have so far been few attempts to systematically assess a broad spectrum of such conditions as applied to science advice. Based on the literature on epistemic trust and trust in science, we will argue that possession of relevant expertise, justified moral and political considerations, and proper institutional design are conditions for trustworthy science advice. In order to assess these conditions further, we will explore the case of temporary advisory committees in Norway. These committees exemplify a de facto trusted and seemingly well-functioning science advice mechanism. Still, this mechanism turns out to realize some central conditions of trustworthy science advice poorly. What lessons can be drawn from this? Is weak fulfillment of some conditions of trustworthiness a reason to distrust? What, if anything, can justify trust in this science advice mechanism, given its shortcomings? We suggest that our case exemplifies the critical importance of distinguishing between well-placed and de facto trust, but also that trust may be warranted even when the realization of conditions of trustworthiness is limited, in part because some conditions seem more significant than others, in part because the trustworthiness of the broader institutional and social context matters.

The paper is structured in the following manner. The first part proposes a set of conditions for trustworthy science advice. The second part explores what the case of the Norwegian advisory committees can teach us about trust and well-placed trust in science advice mechanisms.

Trustworthy Science Advice

Science Advice Mechanisms

In this paper, we examine trust in science advice mechanisms, i.e. institutions designed to include scientists and other experts in public policymaking in order to provide warranted knowledge relevant to policy issues, and policy recommendations based on that knowledge. Science advice is here understood as particular institutional arrangements, such as advisory committees, boards, or panels, that incorporate scientific expertise. Our notion of science advice is broad, but in accordance with standard use (see for instance SAPEA 2019, 111–117). First, science advice need not be exclusively composed of scientific experts, as long as the contribution of scientific expertise is substantial. Science advice mechanisms often amount to forms of collaboration between scientists and other actor groups, such as civil servants and stakeholders.[1] Second, science advice mechanisms comprise not only experts from the natural sciences but also those with backgrounds in the social sciences, humanities, law, engineering, and medicine.

While our notion of science advice mechanisms is sufficiently open to include collaborative and interdisciplinary approaches, it should be distinguished from the more informal role experts can take on in the public sphere.[2] Science advice mechanisms, as we understand them here, amount to mandated science advice, that is, cases in which scientific experts have a formalized role in public policymaking. The mandate provides the scientists with a particular kind of authority and influence in governance processes and public rule. With such influence comes a certain role responsibility and a distinct set of normative expectations (Gundersen 2018), such as policy relevance and applicability, that do not apply in the context of ordinary scientific research, or when scientists operate more freely in the public sphere, such as in the press, social media, or other public arenas.[3] As we shall see, this formalized role has a bearing on the proper institutional design of the science advice mechanism.

The case we examine in this paper, temporary advisory committees in Norway, exemplifies an important advisory institution typical of the Nordic region (see Arter 2008; Tellmann 2016; Christensen and Holst 2017) and a channel for the dissemination of scientific knowledge into policymaking (Holst et al. 2021). The committees are mandated by the government to examine specific policy problems and recommend solutions, for instance concerning climate change, public health, and economic and social policy. The Norwegian committees are ‘hybrid’ in that they may include different types of members, such as civil servants, interest group representatives, and politicians – but also scientists from different academic disciplines and fields (Krick 2015). Even during the 1970s and 1980s, this committee system had the features of a science advice mechanism, performing the important role of providing scientific knowledge in support of committee recommendations and analyses. Moreover, over the last couple of decades there has been a steep increase in the share of scientists among committee members, and an even steeper increase in the share of scientists among chairs (Christensen and Hesstvedt 2019). Many of the most important committees have lately been composed almost exclusively, or with a large majority, of scientists. In addition, the vast increase over time in the use of references in committee reports, including references to scientific publications, indicates a substantive role for scientific knowledge (Christensen 2018; Krick, Christensen, and Holst 2018). Hence, this committee system clearly qualifies as a channel for science advice under our definition.

Epistemic Trust in Institutions

The main rationale for granting scientists an influential role in democratic policymaking is that scientific experts can improve the quality, efficiency, and even direction of policymaking by providing warranted knowledge and making it more ‘truth-sensitive’ (Christiano 2012). The type of trust the public has in science advice is thus a kind of epistemic trust, in the sense that citizens and their representatives defer to it for valid knowledge and knowledge-based recommendations. Similar to other kinds of trust, epistemic trust is a triadic relation (see for instance Baier 1986, 236): A (the public, policymakers) trusts B (an expert, a group of experts, a science advice mechanism) with regard to C (i.e. knowledge claims in a given domain). Epistemic trust in science advice means that the public and policymakers have reason to believe the claims, statements, and assertions provided: ‘Epistemic trust is about taking someone’s testimony that P as a reason to believe that P on the assumption that she is in a position to know whether P and will express her belief truthfully’ (Irzik and Kurtulmus 2019, 4).

Now, the paradigm cases of epistemic trust in the literature are instances of expert testimony given by an expert to a lay person. Science advice mechanisms deviate from such inter-individual testimony in the sense that they normally involve trust in groups of experts that operate within a formalized institutional system. This means that the kind of trust we examine here differs from the standard model of epistemic trust in two main ways. First, the trust that policymakers and the public have in science advice mechanisms is a kind of institutional trust. Second, science advice mechanisms typically involve groups of scientists jointly arriving at expert assessments of the current state of knowledge, sometimes in teams with non-experts (Oppenheimer et al. 2019). The social interaction and dynamics within the groups of experts and non-experts contributing within the confines of a science advice mechanism are thus essential to understanding the kind of trust we are examining and whether a given mechanism merits trust.[4]

Trustworthy Science Advice – Three Kinds of Conditions

When is this kind of institutional epistemic trust in science advice mechanisms well-placed, i.e. when can policymakers and the public have good reasons to defer to such mechanisms for advice? In this section, we map conditions of trustworthy science advice. Our list of conditions is based on an insightful but somewhat scattered exchange in the recent literature in philosophy of science and social epistemology. Based on our reading of this literature, we distinguish between three main kinds of conditions for trustworthiness. Trustworthiness requires, first, scientific competence from the experts contributing to the science advice mechanism and, second, that the moral and political considerations involved are justified. This is well-known from existing contributions, which all connect, albeit in different ways, well-placed trust in science advice not only to competence but also to moral responsibility (O’Neill 2018; Irzik and Kurtulmus 2019; Rolin 2021; Schroeder 2020). However, given the mandated role of science advice mechanisms in public policymaking, there is a need to introduce a third type of condition as well, pertaining to the institutional design of science advice and to features such as independence, transparency, and public deliberation.

Scientific Competence

i) Relevant Scientific Expertise

A first condition for well-placed trust in a science advice mechanism is that the contributing experts possess relevant scientific competencies and skills. Indeed, the distinctive rationale for establishing science advice in the first place is to base policymaking on ‘reliable scientific research’ (Irzik and Kurtulmus 2019, 6). Trustworthiness derives here partly from the general trust in science as a reliable provider of knowledge. Naomi Oreskes instructively identifies two significant reasons for the general trustworthiness of science: ‘its sustained engagement with the world’ and ‘its social character’ (Oreskes 2019, 56). The first reason has to do with the fact that scientists have, due to their specialization, knowledge about the world which most people lack. Scientists should be trusted because they are ‘our designated experts for studying the world’ (Oreskes 2019, 56). The second reason has to do with how the scientific community collectively comes to accept knowledge claims in an open and self-correcting manner. Rather than focusing on the individual character of scientists, Oreskes maintains that it is the social workings of the scientific community and arrangements such as peer review and tenure that make science trustworthy.

Due to specialization, scientific credentials do not grant scientists expert status on all policy issues. This points to the importance of including scientific expertise that is relevant to the area under scrutiny. Epistemic asymmetries between experts and non-experts, as well as limited time, make it notoriously difficult for lay people to identify relevant expertise. Policymakers and citizens must often rely on indirect indicators, such as experts’ affiliations and ‘track records’, to distinguish proper experts in a domain from irrelevant experts and quasi-experts (Oreskes 2019, 58; see also Goldman 2001; Martini 2014; Holst and Molander 2017). The advising experts, for their part, must not only be competent, but also have ‘competence-competence’ (Turner 2014, 280), that is, they must be able to assess whether they in fact have the right expertise for a given task.

ii) Reflecting the Scientific Consensus

In advisory mechanisms, scientists often perform an expert assessment (Oppenheimer et al. 2019) of existing research by making syntheses, overviews, and summaries in ways that are relevant to the political issue in question. In making such assessments, contributing experts almost always draw on research outside their own specialty, and they need not belong to the ‘core set’ of contributing scientists (Collins and Evans 2002). In such endeavors, trustworthy science advice mechanisms are faithful to the current state of knowledge (Goldman 2001; Martini 2014). By appealing to the scientific consensus, advisory mechanisms avoid common problems of lay deference to science, such as depending on anomalous results or on the views of fringe scientists and pseudo-experts.

iii) Cognitive Pluralism

Compliance with existing research is not, however, a straightforward constraint. Providing a proper assessment of research on a given policy issue often involves taking into account severe uncertainty, scientific disagreement, and limited knowledge. Importantly, relevant contributions may come from several disciplines. This means that the contributing experts should not be unduly biased toward certain fields and that science advice mechanisms should include relevant cognitive pluralism (Wright 2019; Holst and Molander 2019).

Justified Moral and Political Considerations

In order to merit trust, experts must not only possess the scientific competencies required to perform their task; they must also be aware of the potential social impact of their work and be able to take the estimated impact into consideration. An otherwise competent expert can fail by being reckless, that is, by failing to take risks, burdens, and infringements of rights into account. Accordingly, trustworthiness is based not only on scientists’ ability to provide scientific knowledge in an accurate manner, but also on their ability to do so in a way that is morally responsible (Douglas 2009; Rolin 2021) and honest (Irzik and Kurtulmus 2019, 6), and that reflects good judgment in political questions (Eriksen 2020). There are different proposals for how experts should make such value judgments, such as the use of ‘hybrid forums’ (Irzik and Kurtulmus 2019, 13; Oreskes 2019, 137), ensuring that experts’ value judgments appeal to values commonly shared by the public and its representatives (Schroeder 2020), or requiring that they be justified from the perspective of reasonable moral and political argument (Kitcher 2011; Holst and Molander 2019; Wolff 2019).

There is an extensive discussion in the philosophy of science concerning the role of values in science. For our purposes, we find it useful to distinguish between two main ways in which moral and political values affect the trustworthiness of science advice. In a weaker sense, the contributing scientists must be morally responsible and honest. For instance, scientists who deliberately misrepresent or withhold knowledge, or who fail to consider how science advice impacts major moral concerns and social distributions, fail for basic and rather uncontroversial reasons. Experts who fail to act in accordance with normative expectations based on basic moral intuitions and principles and the norms of science are arguably not trustworthy. In a stronger, and arguably more contested, sense, scientific experts must also make value judgments in a way that directly impacts the content of the knowledge they communicate and the recommendations they give. For instance, it has been argued that scientists, due to the risk of error, must make ethical value judgments about where to set the evidential standards for accepting a hypothesis (Douglas 2000). If policymakers and citizens place trust in such judgments, this counts as a more extensive or ‘enhanced’ trust than we normally think of in connection with epistemic trust (Irzik and Kurtulmus 2019, 1153).

Proper Institutional Design

i) Institutional Independence and Expert Autonomy

Trustworthy science advice also requires adequate institutional organization. Key in this connection is sufficient independence from political and bureaucratic control. The rationale for deferring knowledge assessment to science advice mechanisms is to delegate that task to scientists who investigate, deliberate, and conclude autonomously. The scientific experts must be free to analyze and assess the current state of knowledge to the best of their knowledge and professional judgment (Owens 2015), without undue interference.

ii) Absence of Conflict of Interest

Members of a science advice mechanism may have commercial, ideological, religious, or other personal ties and interests that conflict with the aim of providing an objective assessment of policy-relevant knowledge. Insofar as the absence of conflicts of interest is a condition for trustworthiness, existing conflicts of interest should be dealt with, either through criteria for selection as an expert or through disclosure. A conflict of interest is problematic due to the increased risk of substantial distortion and skewing of results, but also due to what it signals to citizens (Friedman 2002). This underscores how trustworthiness is not only a function of reliable knowledge and proper moral judgments, but also of how the science advice mechanism communicates to the public about how it arrives at its knowledge assessments and policy recommendations.

iii) Transparency and Procedures for Public Deliberative Scrutiny

A science advice mechanism can be open about expert selection, internal processes, and deliberations (see Elliott 2020 for a taxonomy of transparency). Transparency thus concerns disclosing central documents and information about central decisions and the deliberations among the members of the advisory mechanism. In addition, trustworthiness arguably requires not only transparency in this passive sense, but also critical scrutiny and the scientific experts’ active involvement in public deliberations in different bodies and fora, including the democratic exchange of the wider public sphere (see e.g. Moore 2017; Chambers 2017 on the role of expertise in deliberative democracy).

Exploring Trustworthiness in an Environment of Trust

Advisory Committees in Norway: High Trust and High Achievements

So far, we have provided a systematic account of what makes science advice trustworthy, based on central contributions in the literature. In what follows, we assess these proposed conditions further by exploring the governmental advisory committees in Norway. Interestingly, this committee system is a highly trusted[5] science advice mechanism (e.g. Hesstvedt and Christensen 2021; Krick and Holst 2021) operating in a context where trust in science is high[6] and public institutions are both trusted (Greiling 2014; Rothstein 2021) and considered to perform well (Engelstad et al. 2018). Norway ranks at the top of international human development and democracy indexes and is known to have a comparatively well-functioning civil service and public sector (see Bågenholm et al. 2021 on ‘quality of government’). What happens when our proposed conditions of trustworthy science advice are applied in a context of this sort? To be sure, if the conditions of trustworthiness are aptly formulated and generally valid, they should enable us to distinguish sensibly between well-placed and unwarranted trust in contexts of distrust and poor performance, but no less in a context of high trust and high achievements. The latter type of context, which our selected case exemplifies, is under-investigated in studies of trust in science. Yet, this context is far from marginal; consider for example the similar committee systems in the other Nordic countries (Christensen and Holst 2017), or trusted science advice mechanisms in Germany, the UK, or the EU (Holst et al. 2021). We believe that there are valuable lessons to be learnt from examining an advisory mechanism in an environment of trust.

Limited Levels of Expertise, Independence, and Transparency

It should be emphasized, as a first finding, that our selected case of science advice in part fulfills the spelled-out conditions of trustworthiness. Many committees include scientists with high levels of relevant expertise, including as chairs (Christensen and Hesstvedt 2019). For example, among those most frequently asked to chair committees are leading professors in disciplines such as law, economics, social science, engineering, and medicine. Committee reports also refer substantively, and increasingly over time, to scientific publications, indicating a tendency to base advice on the state of the art in academic research. Even if committees within some policy areas are dominated by one discipline, such as economists in committees on tax policy, there has over time been an increase in multi-disciplinary committees, suggesting a concern for relevant cognitive diversity (Hesstvedt 2021). The mandates and composition of committees are public, and committee reports are public once they are submitted to the government, thus contributing to transparency (Krick and Holst 2021). Finally, civil society has representatives in several of the committees, and a mandatory hearing procedure ensures that stakeholders and citizens can engage with and deliberate on the reports and deliver written inputs and remarks, providing this committee system with important democratic credentials. Based on our proposed notion of trustworthy science advice mechanisms, there are thus several reasons for regarding trust in Norwegian advisory committees as well-placed.

However, from the perspective of trustworthiness, this committee system also has features that are worrisome. First, there are few standardized procedures in place to ensure scientific quality. Significantly, committee reports are not peer-reviewed in the academic community, and, when the government selects scientists as members and chairs, this seems to be based more on ad hoc assessments of credibility and reputation than on systematic checks of academic records (Hesstvedt and Christensen 2021). From the perspective of ensuring relevant expertise and advice in accordance with the current state of knowledge, this is problematic. Second, the government exerts decisive control over the composition of committees, the formulation of committee mandates, and the committee secretariats (Hesstvedt 2021; Krick and Holst 2021). This has resulted in member selection based on political criteria, narrow and biased mandates, and reports influenced by bureaucratic and political interests. When we add the fact that interest groups in the policy domain under scrutiny are often included around the committee table, rather than excluded on conflict-of-interest grounds, this indicates overall a science advice mechanism with limited institutional independence. Lastly, committee proceedings are relatively closed. Even if there are examples of committees with a more public profile during the proceedings (Krick, Christensen, and Holst 2018), most committees hold closed meetings, do not publish meeting minutes or background documents, and engage meagerly with the public, be it through consultation meetings or digital platforms. Limitations on transparency are thus considerable.

Conditions for Trustworthy Science Advice Revisited: Three Lessons

i) Trusted Science Advice Need Not Be Trustworthy

What can this assessment of the Norwegian committees teach us about the conditions of trustworthy science advice, and about the relation between trust and trustworthiness? First and foremost, the system of Norwegian advisory committees is a trusted science advice mechanism with many merits, operating in an institutional context of trust, which nevertheless does not realize all the conditions of trustworthiness. We see this as a reminder that we should neither lament distrust nor hail trust irrespective of trustworthiness considerations. Institutions can be de facto trusted, robust, even effective, without being trustworthy. This finding should not strike us as too puzzling or surprising. There are several reasons why trust and trustworthiness will often not overlap.

Firstly, actors might have limited awareness of, or even contest, our proposed conditions of well-placed trust. For example, in our case, interviews with committee members and users of committee reports reveal that they do not regard conflicts of interest as a major problem. Rather, many consider it important that representatives of powerful interest groups are included as committee members, even when they have clear stakes in the matter. Having interests or stakes is not considered a reason to distrust (Hesstvedt and Christensen 2021). A related point is that it may not be a priority for the actors involved to realize all of our trustworthiness conditions more fully. In our case, bureaucrats, influential stakeholders, and involved scientists are well aware of the arguments for transparency. However, some are still skeptical of making committee proceedings more open to public scrutiny, as they believe this would compromise process efficiency and report quality (Krick and Holst 2021).

A second reason why trust and trustworthiness will often not overlap is that the conditions we propose for trustworthy science advice might be difficult to fulfill at the same time, as increased realization of some conditions tends to spur lower realization of others. One example is how measures to strengthen democratic credentials may compromise institutional independence. On the one hand, when the elected government controls the mandates and composition of advisory committees, this may increase the likelihood of advisory reports on issues, and with priorities, that reflect the will of the majority. On the other hand, political interference and cherry-picking of experts will tend to challenge the independence and integrity of science advice. Another example is how transparency considerations and the concern for expert autonomy may pull in opposite directions. On the one hand, transparency increases opportunities for broad public attention, criticism from citizens and stakeholders, scrutiny from the media, and so on. On the other hand, transparency may also facilitate lobbying by stakeholders during committee proceedings and put pressure on scientific experts to conform to powerful interests and public opinion rather than remaining loyal to evidence and arguments.[7] The tensions between some of the proposed conditions indicate that there will be trade-offs between them.

Finally, there may be other important parameters in addition to trustworthiness. For example, in our case, stakeholders and bureaucrats cherish the advisory committees because these committees have real influence on public will formation and decision-making. This may spur support for measures that strengthen the links between the committees and both the government and social and market actors, when such measures are likely to increase the impact of the committee system, even if they may also endanger the system’s autonomy and independence from political interests and pressures. These complexities make it easier to understand why even a highly trusted committee system in a trusting environment may be less than fully trustworthy. That notwithstanding, public support for a science advice mechanism should not make us worry less about such deficits if we want trust in institutions to be well-placed.

ii) Some Conditions of Trustworthy Science Advice Are More Important Than Others, and There Are Thresholds

When a trusted and appreciated science advice mechanism in a well-functioning political and administrative system does not fulfill the conditions of trustworthiness, this should also spur us to revisit our set of conditions. How should these conditions be understood? Is there a need for clarifications or amendments?

For one, it could be argued that, as long as the three main conditions (scientific competence, justified moral and political considerations, and proper institutional design) are realized to a certain degree, a science advice mechanism should, all things considered, be regarded as trustworthy. Complete fulfillment of each condition would seem to be over-demanding. Thus conceived, each of the three main conditions could be understood as a threshold condition: above a minimal realization of each of them, it seems sensible to consider science advice trustworthy. For example, while the Norwegian advisory committee system has yet to institutionalize more standardized peer review procedures, most committees include relevant expertise within their ranks, and, arguably, this is sufficient to make them trustworthy. That is, even if the condition of scientific competence is fulfilled only to a limited extent, most committees may plausibly be located above a reasonable threshold of competence, and so have the scientific competence required for producing trustworthy advisory reports.
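
The threshold idea can be put schematically as follows. This is an illustration of our own, not a measurement proposal, and the symbols are introduced here purely for convenience: let $r_{i}(M)$ denote the degree to which a science advice mechanism $M$ realizes main condition $i$, and $\tau_{i}$ the corresponding minimal threshold. Then:

\[
\text{Trustworthy}(M) \iff r_{C}(M) \ge \tau_{C} \,\wedge\, r_{V}(M) \ge \tau_{V} \,\wedge\, r_{D}(M) \ge \tau_{D}
\]

where $C$ stands for scientific competence, $V$ for justified moral and political considerations, and $D$ for proper institutional design. On this reading, full realization ($r_{i}(M) = 1$) is not required; what matters is that no main condition falls below its threshold.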

Moreover, it seems very demanding, and not immediately sensible, to view all the listed conditions of trustworthiness as equally important and necessary. For instance, while possession of relevant scientific competence and advice in accordance with state-of-the-art research could reasonably be conceived of as necessary conditions for well-placed trust (it is ill-advised to trust science advice that is not properly science-based), failing to realize cognitive pluralism in an advisory committee may not in all cases be a reason to distrust. A committee on tax policy, say, may benefit from including experts from diverse disciplinary backgrounds. Still, even when this committee is dominated by economists, and there may be reasons to consider this suboptimal, this may not be a sufficient reason to distrust the committee.

Importantly, some of our spelled-out sub-conditions are not only not necessary; they need not even improve the trustworthiness of science advice. A case in point is transparency (John 2018; see also Chambers 2004). Public exposure of science advice deliberations may generate tiptoeing and excessive caution. For instance, in cases where stakes are high and consequences for some groups are severe, transparency could make individual experts over-cautious. Openness may undermine the efficiency of science advice mechanisms and their ability to take the public good into consideration when there are conflicts between general and special interests. Secrecy may thus enable proper expert deliberation, and it seems misplaced, then, to conceive of limited openness as generally undermining the trustworthiness of science advice. Of course, this is not to deny that secrecy may be exploited by special interests. Rather, our point is that secrecy may in some cases serve the public good in a way that merits trust. In other cases, for example when experts are hired by industry to question the scientific consensus in order to halt regulatory policies (Oreskes and Conway 2011), limited transparency would severely endanger trustworthiness.

Overall, then, our identification of a peculiar trustworthiness deficit in the Norwegian advisory committees has led us to specify the three main conditions of trustworthiness as threshold conditions, to distinguish between necessary and more secondary conditions, and to recognize that some conditions may in some cases contribute to increased, and in others to decreased, trustworthiness. Even if the trustworthiness deficits of this committee system should be taken seriously, these clarifications and revisions should be taken into account as well. In our case, this may ease our worries, for example concerning limitations on transparency, as transparency is both a secondary concern and not always conducive to trustworthiness. However, our considerations may also intensify worries, for example in the case of committees where it is not clear that relevant scientific expertise is present (a necessary condition of trustworthy science advice), where strong interest groups sit around the committee table, and where peer review procedures are not in place. Arguably, committees with such a worrisome combination of features – committees that are considerably less common now than during the 1970s and 80s – fall beneath the minimal realization of our condition set needed for trust to be well-placed.

iii) Institutional and Social Context Is Relevant for Assessments of the Trustworthiness of Science Advice Mechanisms

Finally, reflecting on our case exploration, the third lesson we draw is that institutional and social context matters more fundamentally for assessments of when trust in science advice mechanisms is well-placed. Our selected science advice mechanism is situated in a context characterized by high levels of trust, and in a political and administrative system of comparatively high quality with many achievements. Arguably, this may be a separate reason for regarding Norwegian advisory committees as trustworthy. It is well-known that de facto trust in science advice depends on trust in the wider social and institutional setting (Greiling 2014).[8] However, it seems sensible to similarly regard the trustworthiness of science advice as conditioned on the achievements and trustworthiness of the broader set of public institutions. This seems to be no less the case when the environment is at the other end of the spectrum, distrusting and dysfunctional: trust in a science advice mechanism run by a poorly performing government, in contexts where social and institutional trust is generally ill-warranted, seems less well-founded, even when the scientific experts are well reputed and show good moral judgment, and the mechanism in question is independent and transparent. This is so, first, because science advice is more likely to have a stable and robust existence and organization in a well-functioning governing system and trusting environment, where adequate resources are available and the rule of law and good regulatory governance are in place, than in a more poorly performing system, not to mention a corrupt or failed state. Moreover, the advice of scientific experts is likely to be handled with greater care and respect, and more likely to be implemented soundly and efficiently, in settings of high trust and well-functioning institutions than in contexts of bad governance where trust is overall ill-placed.

That is, the trustworthiness of the broader institutional set-up in which a science advice mechanism is embedded is, we believe, a candidate for a fourth main condition for trustworthy science advice mechanisms. Once more, this addition will not make the worries concerning the trustworthiness deficits of the Norwegian advisory committees disappear. It does, however, arguably put those worries into context and proportion.

Conclusion

In this paper, we have conceptualized trust in science advice mechanisms as a kind of institutional epistemic trust and argued, based on a review of the existing literature, that possession of relevant scientific expertise, justified moral and political considerations, and proper institutional design are the three main conditions for trustworthy science advice. Assessing these conditions further, we explored the case of temporary advisory committees in Norway, a trusted and seemingly well-functioning science advice mechanism, which nevertheless turned out to realize some of the proposed trustworthiness conditions poorly. Based on this exploration, we drew three lessons: firstly, that it remains crucial to distinguish between well-placed and de facto trust; secondly, that some conditions of trustworthy science advice seem more significant than others and that there are thresholds; and thirdly, that the trustworthiness of the broader institutional and social context matters. Overall, then, the trustworthiness deficits of the Norwegian advisory committees are less pronounced than they seem at first glance, even if they are substantive and need to be taken seriously.

Our discussion has some limitations. Our review of the relevant literature is arguably less than complete. A broader review could have included contributions from public policy, political theory, and science and technology studies, and dug deeper into discussions of deliberative democracy. We have explored and assessed the Norwegian advisory committees in a way that is sufficient for our purposes, but what we have provided is not a full-fledged presentation and discussion. We have suggested a set of conditions of trustworthy science advice, and offered clarifications and amendments, but we have had to leave aside several more detailed issues. Importantly, we do not purport to resolve the trade-offs that will occur in concrete cases when conditions of trustworthiness cannot all be fulfilled, or fulfilled equally, at the same time.

Still, we believe our intervention has been worthwhile and has added value. Hopefully, it can spur further investigation, in particular along two paths. First, there is a need for a more thorough discussion of the status of the conditions of trustworthy science advice. Our suggestion of thresholds and our distinction between more and less central conditions provide only a beginning. Second, philosophical discussions of the trust and trustworthiness of science and science advice should relate more to institutional factors and context. Not only do the institutional design and organization of science advice matter more than is often recognized; the trust and trustworthiness of the broader social and political context and institutional environment make a difference as well. In particular, previous philosophical discussions, which have tended to focus on scientific fields characterized by high levels of distrust, such as climate science, evolutionary theory, and technologies such as GMOs or vaccines (see for instance Goldenberg 2021; Kitcher 2011), often in a US context or in the context of international expert bodies (see for instance Oppenheimer et al. 2019; Oreskes 2019), should be supplemented by inquiries informed by studies and examples from other national contexts, including more trusting environments.

Acknowledgements

We would like to thank the anonymous reviewers and the editors of this special issue, Ty Branch and Gloria Origgi, as well as Jack Wright, for valuable comments. We have also benefited from comments by participants at the PERITIA online workshop ‘Social Indicators of Trust’, the POW (Politics, Organization and Work) seminar at the Department of Sociology and Human Geography at the University of Oslo, a seminar at the Centre for Philosophy and the Sciences (CPS) at the University of Oslo, and a workshop organized by the GOODPOL project at the Centre for Advanced Study in Oslo. We would also like to thank Philip Kitcher, who provided very helpful comments at a seminar organized by the Centre for the Study of Professions, Oslo Metropolitan University.

Disclosure Statement

No potential conflict of interest was reported by the author(s).


Funding

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 870883. The information and opinions are those of the authors and do not necessarily reflect the opinion of the European Commission.

Notes

1. These may very well be experts as well. Experts are those who possess more valid knowledge in some domain than most others (see Goldman 2001 for a more precise elaboration), and such experts need not be scientists (Grundmann 2016). However, in this paper we focus on science advice and, accordingly, on scientific expertise. In some passages we therefore allow ourselves to use “experts” or “expertise” as shorthand for “scientific experts” or “scientific expertise”.

2. Our notion of science advice here does not include the many ways in which scientific expertise informs policymaking as a part of the bureaucracy in ministries, directorates, agencies and similar bodies.

3. For an interesting examination of the more free-wheeling role of experts, see Origgi, Branch-Smith, and Morisseau (2021).

4. Even in cases where policymakers seek advice from a single scientific advisor, this relation is best described not as an interpersonal relation, but as a relation between role-holders and institutions.

5. Indicated by recently conducted interview studies with government officials, scientists, and civil society and interest group representatives.

6. A recent survey conducted on behalf of The Research Council of Norway indicates that there is a high level of trust in science among Norwegian citizens (for the report, in Norwegian, see: https://www.forskningsradet.no/om-forskningsradet/forskningsradets-holdningsundersokelser/).

7. Another example is that compliance with the scientific consensus might conflict with the condition of cognitive pluralism. For instance, including fringe scientists might undermine the prospects of compliance with the scientific consensus while enhancing pluralism. We thank our reviewers for pointing this out.

8. For example, as Goldenberg (2021) argues, the lack of trust in vaccines is mainly tied to a lack of trust in scientific institutions and in the institutions responsible for informing and recommending vaccination policies.

References

  • Almassi, B. 2012. “Climate Change, Epistemic Trust, and Expert Trustworthiness.” Ethics & The Environment 17 (2): 29–49. doi:10.2979/ethicsenviro.17.2.29.
  • Arter, D. 2008. Scandinavian Politics Today. Manchester: Manchester University Press.
  • Bågenholm, A., M. Bauhr, M. Grimes, and B. Rothstein, eds. 2021. The Oxford Handbook of the Quality of Government. Oxford: Oxford University Press.
  • Baier, A. 1986. “Trust and Antitrust.” Ethics 96 (2): 231–260. doi:10.1086/292745.
  • Chambers, C. 2004. “Behind Closed Doors: Publicity, Secrecy, and the Quality of Deliberation.” The Journal of Political Philosophy 12 (4): 389–410. doi:10.1111/j.1467-9760.2004.00206.x.
  • Chambers, S. 2017. “Balancing Epistemic Quality and Equal Participation in a System Approach to Deliberative Democracy.” Social Epistemology 31 (3): 266–276. doi:10.1080/02691728.2017.1317867.
  • Christensen, J., and C. Holst. 2017. “Advisory Commissions, Academic Expertise and Democratic Legitimacy: The Case of Norway.” Science & Public Policy 44 (6): 821–833. doi:10.1093/scipol/scx016.
  • Christensen, J. 2018. “Economic Knowledge and the Scientization of Policy Advice.” Policy Sciences 51 (3): 291–311. doi:10.1007/s11077-018-9316-6.
  • Christensen, J., and S. Hesstvedt. 2019. “Expertization or Greater Representation? Evidence from Norwegian Advisory Commissions.” European Politics and Society 20 (1): 83–100.
  • Christiano, T. 2012. “Rational Deliberation Among Experts and Citizens.” In Deliberative Systems: Deliberative Democracy at the Large Scale, edited by J. Parkinson and J. Mansbridge, 27–51. Cambridge: Cambridge University Press.
  • Collins, H. M., and R. Evans. 2002. “The Third Wave of Science Studies: Studies of Expertise and Experience.” Social Studies of Science 32 (2): 235–296. doi:10.1177/0306312702032002003.
  • Douglas, H. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4): 559–579. doi:10.1086/392855.
  • Douglas, H. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
  • Dunlap, R. E., and A. M. McCright. 2011. “Organized Climate Change Denial.” In The Oxford Handbook of Climate Change and Society. Vol. 1, edited by J. S. Dryzek, R. B. Norgaard, and D. Schlosberg, 144–160. Oxford: Oxford University Press.
  • Elliott, K. C. 2020. “A Taxonomy of Transparency in Science.” Canadian Journal of Philosophy 1–14. doi:10.1017/can.2020.21.
  • Engelstad, F., C. Holst, and G. Aakvaag, eds. 2018. Democratic State and Democratic Society. Berlin: De Gruyter Open.
  • Eriksen, A. 2020. “The Political Literacy of Experts.” Ratio Juris 33 (1): 82–97. doi:10.1111/raju.12269.
  • Friedman, P. J. 2002. “The Impact of Conflict of Interest on Trust in Science.” Science and Engineering Ethics 8 (3): 413–420. doi:10.1007/s11948-002-0063-9.
  • Goldenberg, M. J. 2021. Vaccine Hesitancy: Public Trust, Expertise, and the War on Science. Pittsburgh, PA: University of Pittsburgh Press.
  • Goldman, A. I. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63 (1): 85–110. doi:10.1111/j.1933-1592.2001.tb00093.x.
  • Greiling, D. 2014. “Accountability and Trust.” In The Oxford Handbook of Public Accountability, edited by M. Bovens, R. E. Goodin, and T. Schillemans, 617–631. Oxford: Oxford University Press.
  • Grundmann, R. 2016. “The Problem of Expertise in Knowledge Societies.” Minerva 55 (1): 25–48. doi:10.1007/s11024-016-9308-7.
  • Gundersen, T. 2018. “Scientists as Experts: A Distinct Role?” Studies in History and Philosophy of Science Part A 69: 52–59. doi:10.1016/j.shpsa.2018.02.006.
  • Hardin, R. 2006. Trust. Cambridge: Polity.
  • Hesstvedt, S. 2021. “The Politics of Expert Advice.” PhD dissertation, University of Oslo.
  • Hesstvedt, S., and J. Christensen. 2021. “Political and Administrative Control of Expert Groups—A Mixed-Methods Study.” Governance, March 30. doi:10.1111/gove.12599.
  • Holst, C., and A. Molander. 2017. “Public Deliberation and the Fact of Expertise: Making Experts Accountable.” Social Epistemology 31 (3): 235–250. doi:10.1080/02691728.2017.1317865.
  • Holst, C., and A. Molander. 2019. “Epistemic Democracy and the Role of Experts.” Contemporary Political Theory 18 (1): 541–561. doi:10.1057/s41296-018-00299-4.
  • Holst, C., H. L. Kjos, E. Krick, and T. Gundersen. 2021. “Science Advice Mechanisms: Varieties and Reform.” PERITIA White Paper. https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5dc61d3b2&appId=PPGMS
  • Irzik, G., and F. Kurtulmus. 2019. “What is Epistemic Public Trust in Science?” The British Journal for the Philosophy of Science 70 (4): 1145–1166. doi:10.1093/bjps/axy007.
  • John, S. 2018. “Epistemic Trust and the Ethics of Science Communication: Against Transparency, Openness, Sincerity and Honesty.” Social Epistemology 32 (2): 75–87. doi:10.1080/02691728.2017.1410864.
  • Kitcher, P. 2011. Science in a Democratic Society. Amherst, NY: Prometheus Books.
  • Krick, E. 2015. “Negotiated Expertise in Policy-Making: How Governments Use Hybrid Advisory Committees.” Science & Public Policy 42 (4): 487–500. doi:10.1093/scipol/scu069.
  • Krick, E., J. Christensen, and C. Holst. 2018. “Between ‘Scientisation’ and a ‘Participatory Turn’: Tracing Shifts in the Governance of Policy Advice.” Science & Public Policy 46 (6): 927–939. doi:10.1093/scipol/scz040.
  • Martini, C. 2014. “Experts in Science: A View from the Trenches.” Synthese 191 (1): 3–15. doi:10.1007/s11229-013-0321-1.
  • Meijer, H. 1969. “Bureaucracy and Policy Formulation in Sweden.” Scandinavian Political Studies 4 (A4): 103–116. doi:10.1111/j.1467-9477.1969.tb00522.x.
  • Moore, A. 2017. Critical Elitism: Deliberation, Democracy, and the Problem of Expertise. Cambridge: Cambridge University Press.
  • O’Neill, O. 2018. “Linking Trust to Trustworthiness.” International Journal of Philosophical Studies 26 (2): 293–300. doi:10.1080/09672559.2018.1454637.
  • Oppenheimer, M., N. Oreskes, D. Jamieson, K. Brysse, J. O’Reilly, M. Shindell, and M. Wazeck. 2019. Discerning Experts. Chicago: University of Chicago Press.
  • Oreskes, N., and E. M. Conway. 2011. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Publishing USA.
  • Oreskes, N. 2019. Why Trust Science? Princeton: Princeton University Press.
  • Origgi, G., T. Branch-Smith, and T. Morisseau. 2021. “Trust, Expertise and the Controversy over Chloroquine.” Social Epistemology: A Journal of Knowledge, Culture and Policy.
  • Owens, S. 2015. Knowledge, Policy, and Expertise: The UK Royal Commission on Environmental Pollution 1970–2011. Oxford: Oxford University Press.
  • Rolin, K. H. 2020. “Trust in Science.” In The Routledge Handbook of Trust and Philosophy, edited by J. Simon, 354–366. New York: Routledge.
  • Rolin, K. H. 2021. “Objectivity, Trust and Social Responsibility.” Synthese 199 (1): 513–533.
  • Rothstein, B. 2021. Controlling Corruption: The Social Contract Approach. Oxford: Oxford University Press.
  • SAPEA. 2019. “Making Sense of Science for Policy Under Conditions of Complexity and Uncertainty.” Evidence Review Report No. 6.
  • Schroeder, S. A. 2020. “Democratic Values: A Better Foundation for Public Trust in Science.” The British Journal for the Philosophy of Science 72 (2): 545–562.
  • Tellmann, S. M. 2016. “Experts in Public Policymaking: Influential, yet Constrained.” PhD dissertation, Oslo and Akershus University College of Applied Sciences.
  • Turner, S. 2014 [2001]. “What is the Problem with Experts?” Social Studies of Science 31 (1): 123–149. doi:10.1177/030631201031001007.
  • Wolff, J. 2019. Ethics and Public Policy: A Philosophical Inquiry. London: Routledge.
  • Wright, J. 2019. “Pluralism and Social Epistemology in Economics.” PhD dissertation, University of Cambridge.