
Responsible innovation in the age of science conspiracism

Pages 398-418 | Received 06 Mar 2021, Accepted 19 Aug 2022, Published online: 12 Oct 2022

ABSTRACT

Responsible innovation is centered around the ideal that societal stakeholders are entitled to participate in scientific and technological decision-making by voicing their needs and worries. Individuals who believe in science conspiracies (referred to here as ‘science conspiracists’) pose a challenge to implementing this ideal because it is not clear under what conditions their inclusion in responsible innovation exercises is possible and advisable. Yet precisely because of this uncertain status, science conspiracists constitute an instructive case in point to travel towards the edges of inclusion and understand how we draw the line between ‘includables’ and ‘unincludables’. In this paper, we seek to explore this relationship between responsible innovation and science conspiracism by using the method of thought experimentation. We test four possible exclusion criteria for science conspiracists. We conclude by revisiting the relationship between conspiracism and responsible innovation and sketching a novel perspective on the ideal of stakeholder inclusion.

Introduction

The inclusion of stakeholders in scientific and technological decision-making lies at the heart of many contemporary approaches to science governance such as responsible research and innovation (van den Hoven et al. 2014; Owen et al. 2013; Grunwald 2011), technology assessment in its various versions (Grunwald 2009; Hellstrom 2003), ethics of technology (Hansson 2017; Groves 2009) and, perhaps most explicitly of them all, public engagement with science (Selin et al. 2017; Wilsdon and Willis 2004; Stilgoe, Lock, and Wilsdon 2014). There are many differences between all these approaches in the conceptualization of relevant stakeholders, the sought degree of inclusion, the reasons for inclusion, methods employed, and expected outcomes. Yet despite this variation, the aforementioned approaches share an emphasis on stakeholder inclusion.

Scholars and practitioners working on stakeholder inclusion have been well aware of the need to include a diversity of voices in innovation, with particular attention to critical voices (Genus and Iskandarova 2018; de Bakker et al. 2014; Stilgoe, Lock, and Wilsdon 2014; Selin et al. 2017). Without the inclusion of critical voices, the practice of responsible innovation would be nothing more than window-dressing for societal agreements that are reached before the discussion even starts. But how critical can these critical voices be before the process breaks down? On the one hand, some have suggested that there must be something like an ‘optimal cognitive distance’ between the stakeholders involved if the inclusion is to be meaningful and efficient (Cuppen 2012; Nooteboom et al. 2007). On the other hand, others have highlighted the advantages of bringing together stakeholders that are separated by fundamental differences in worldviews, language and attitude towards innovation and are thus highly critical of each other (Blok 2014, 2019).

In the context of these discussions of methodology, we believe it is expedient to take into consideration a group of stakeholders that at least prima facie poses a novel set of problems for responsible innovation: science conspiracists. We take this term to refer to individuals who believe that specific scientific developments (such as technologies, theories, innovation products and processes) are the result of a group of individuals who conspire to the detriment of others. For example, if the science in question is the COVID-19 vaccine, then the science conspiracist will believe in (and perhaps try to convince others of) the theory that her subgroup is used as a guinea pig for some experiments or that the vaccine is only employed to somehow control the population (Pummerer et al. 2020; Marques et al. 2021). Other well-known cases of science conspiracism are the AIDS conspiracist movement (Kalichman 2014), the climate change conspiracists (Dunlap and McCright 2010) and more recently the 5G conspiracists (Cerulus 2020). Notice that the conspiracy must be perceived as occurring ‘to the detriment of others’, otherwise it is not clear that the scientific development in question can be said to be part of a conspiracy. If the activities of the individuals are good and seen as such, they will not count as a conspiracy. As we will later discuss (see Section ‘Science conspiracism’), the semantic boundaries of the expression are not always easy to establish. Science conspiracists are in any case a sub-group of conspiracists generally speaking, which includes those who hold beliefs regarding conspiracies that are formed in the political or public arena.

Science conspiracists pose a special challenge to the ideal of inclusion as developed in the field of responsible innovation. Not only do they oppose a certain scientific theory, technology, innovation process or product, but they additionally believe that the mainstream narrative around that development – what is being said about it on the news, in standard newspapers and in political speeches – is essentially intended by some to deceive. This is problematic because responsible innovation exercises are typically carried out within, and under the assumptions of, mainstream narratives. In the game of responsible innovation, we can allow proponents and opponents of a certain innovation process or product, but what do we do with those who believe that the game is rigged? After all, ‘plots, schemes and conspiracies imply some kind of agency which is preventing us from discovering the truth, from connecting events and causes in a correct manner’ (Parker 2000, 194). Applying the philosophy and theory of responsible innovation (van den Hoven et al. 2014; Koops et al. 2015; Asveld 2017) to science conspiracists is an opportunity to reflect on the limits of the ideal of inclusion in responsible innovation. We believe that the COVID-19 pandemic has increased the ‘need to know more about fatalism with respect to science governance and disenchantment about engagement’ (Stilgoe, Lock, and Wilsdon 2014, 7).

In this paper, we take science conspiracism as a case in point and travel to the edges of stakeholder inclusion. Methodologically, we propose to explore the case of science conspiracism by means of thought experimentation (Popa 2016; Brown 2011). The method of thought experimentation is useful in this situation because through it we can test various proposals as to the best approach regarding science conspiracism from a responsible innovation perspective. These proposals are tested not against empirical data (as in an actual laboratory experiment) but against our theories, intuitions, philosophical assumptions, practical knowledge etc. Our paper is organized as follows.

In Section ‘The peculiar challenge of science conspiracism’, we introduce the notion of science conspiracism and we explain why this particular stakeholder group poses a peculiar challenge to the field of responsible innovation. In Section ‘A method for testing deal-breakers’, we explain our method for dealing with potential ‘deal-breakers’, that is, criteria that can reasonably be expected to exclude science conspiracists from responsible innovation exercises. These criteria are: (i) fraud, (ii) anonymity, (iii) irrelevance and (iv) unresponsiveness or unfalsifiability. In Section ‘Thought experiments on stakeholder inclusion’, we deploy our thought experiments in order to check the plausibility of these criteria against imaginary scenarios where we can tinker with particular variables that feed into the inclusion/exclusion decision-making process. In Section ‘Conclusion’, we conclude by explaining the consequences of the considerations from the previous section and outlining a perspective from which contact with science conspiracists can be both instructive and even inspiring.

The peculiar challenge of science conspiracism

Science conspiracism

Before moving forward, it is important to highlight some definitional issues surrounding the terms ‘science conspiracist’ and ‘science conspiracism’. As mentioned in the introduction, the term ‘science conspiracist’ refers here to individuals who believe that specific scientific developments (such as technologies, theories, innovation products and processes) are the result of a group of individuals who conspire to the detriment of others. Accordingly, ‘science conspiracism’ refers to the social phenomenon of science conspiracists forming the beliefs in question and acting upon them. In using this definition, we aim to cast our net as widely as possible, including all stakeholders that satisfy this definition and employing the term in a normatively neutral way regarding the rationality of the mentioned belief.

The terms ‘conspiracist’, ‘conspiracy theory’ and ‘conspiracism’ have triggered a long and as yet unfinished philosophical discussion regarding their definitions, connotations and implications (see Dentith 2014, 7–23; 2018; Coady 2006). One of the most prominent issues is that the term ‘conspiracist’ and its derivatives carry negative connotations. The suggestion is often that the beliefs in question are unwarranted or in some other sense irrational: ‘believers in a conspiracy-based explanation are often labelled lunatics, kooks or paranoiacs’ (Byford 2011, 120). These negative connotations arise even though ‘conspiracy theories, as a general category, are not necessarily wrong’ (Keeley 2006, 46) and they ‘are not necessarily unwarranted’ (Keeley 2006, 60). Although these are critical issues to consider in an epistemological discussion on questions of rationality, in this paper we want to focus on our ethical response to science conspiracism – specifically, what the relationship is between our ideals of inclusive innovation and science conspiracists. Our relatively broad definition might satisfy some scholars and upset others, but since we are exclusively interested in questions pertaining to inclusion and the democratization of the scientific process, sidelining the epistemological discussion seems expedient and perhaps necessary. We are only interested in the relationship between the ethical requirement of inclusion as developed in responsible innovation (e.g. Stilgoe, Owen, and Macnaghten 2013) and the group as defined above. The reason we are interested in this relationship is that it seems to constitute a blind spot in current narratives of responsible innovation; there is general agreement that we should include ‘citizens’ in science and innovation, but it is not always clear who falls within and outside this broad category of includable citizens (see further elaboration on this point in Section ‘Science conspiracism meets responsible innovation’). For the present purposes, therefore, epistemological questions can be temporarily sidelined, and a broader definition will be more expedient.

There have been examples of science conspiracism in the past, but the COVID-19 pandemic provides the most recent and, given the amplified visibility on social media, the clearest case of conspiracism in general and science conspiracism in particular. The hashtag #FilmYourHospital was used on Twitter for user footage of hospitals or testing locations that were deemed too empty or too calm, purportedly showing that the COVID-19 pandemic was nothing but a hoax. In one of these videos, a man is filming a relatively inactive testing site and repeatedly asks: ‘Where are all the sick people?’. Examples of graffiti with messages such as ‘COVID = hoax’ and ‘Covid is a hoax 5G is the killer’ started to appear in the United States and the United Kingdom. The far-right newspaper The Epoch Times sustained a global promotion campaign for the theory that the COVID-19 virus was engineered in a laboratory in Wuhan, China (Bellemare, Ho, and Nicholson 2020). But perhaps the most telling example of science conspiracism in the COVID-19 case is the 26-minute documentary Plandemic: The Hidden Agenda behind Covid that went viral in May 2020 (Frenkel, Decker, and Alba 2020). The considerable number of claims advanced in this documentary sparked an immediate debate and a ban from social media platforms such as Facebook and YouTube. They were all variations on the same theme: what mainstream science says is the case about the origin of the virus, possible cures, and prevention is essentially a smoke screen designed to further the interest of malevolent actors such as the pharmaceutical industry or the Chinese government.

Science conspiracists have a prima facie legitimacy as stakeholders in the broader picture of science governance. Unlike groups animated by racism, xenophobia, homophobia and other such ideologies that contradict fundamental (liberal) values, science conspiracists labor under assumptions that we can easily recognize, such as truth, protection, justice, and freedom. Of course, it can happen that some science conspiracists entertain other beliefs aside from their conspiracism and that these ‘ancillary’ beliefs place them outside the liberal spectrum, but science conspiracism as such does not necessarily contradict the core of our governance systems (we maintain the expression prima facie because our definition of science conspiracists as given above also involves acting upon the beliefs in question and these acts might themselves result in their forfeiting this prima facie includability).

Not only are science conspiracists aligned with a set of core values that we cannot give up (not even in the name of diversity and pluralism), but their claims also turn out to be, at least sometimes, correct. As it happens, science conspiracies do sometimes occur. A well-known example is the series of scientific developments that culminated in the global use of lead in petrol (Kovarik 2005; Gee 2001). It is worth going through the main elements of this conspiracy theory not only to understand its components but also to see that the counter-narrative is, at times, the correct one. In the rise and fall of tetraethyl lead, all the elements typically associated with science conspiracism are present:

  • Initially, the technology is presented as a great achievement of science: In 1923 tetraethyl lead was presented to the public as a miracle solution against car engines ‘knocking’ (stuttering and jerking while ignited).

  • Knowledge and disregard of dangers and alternatives: Even though scientists were well aware of the public health risks (and somewhat aware of the environmental ones), the dangers were repeatedly denied before the public. Furthermore, the existence of a perfectly viable and safe alternative was denied even though it was already in use in Europe and Latin America (Kovarik 2005, 387).

  • Economic interests: The alternative in question, namely ethanol, was neither patentable nor profitable.

  • Small group of powerful conspirators: The Ethyl Corporation was founded in 1924 by General Motors, DuPont and Standard Oil (now Exxon/Mobil) and was sold only after increased controversy over lead in the late 1960s.

  • Conspirators suppress opponents: The companies and the related associations repeatedly used their standing in the United States to either suppress contrary evidence or present their own research as scientific evidence (Needleman and Gee 2013). When in the early 1920s factory workers who handled lead were experiencing mental syndromes at an alarming rate, producers insisted that the deaths and insanity were ‘caused by heedlessness of workers in failing to follow instructions’ (quoted in Kovarik 2005, 387).

  • Catastrophic results: Although the exact effect of lead on public health and the environment is difficult to measure – given the large number of confounding variables – leaded gasoline has been linked to IQ deficiency and congenital malformations in children, hypertension, immunotoxicity as well as various types of cancer (Menkes and Fawcett 1997; Goyer 1990).

If you had been a conspiracy theorist in the late 1950s claiming that the economic interests of the few were behind tetraethyl lead and that ‘the real science’ was being suppressed, you would have been completely right. Thus, starting from this prototypical example, we might observe with some unease that most cases of grossly underestimated public health hazards fall within the category of conspiracy theories. New inventions such as DDT, Bisphenol A, beryllium, asbestos, benzene, halocarbons, the DES drug, PCBs, DBCP and many that we might not know of today have a similar history of ‘early warnings’ being rejected by a relatively small group of experts (Harremoës et al. 2013; Gee 2001), with some, such as DDT, also being the subject of science conspiracism (Offit 2017).

We, therefore, take this as the starting point of our investigations: science conspiracists believe that specific scientific theories, developments and innovations are the subject of a conspiracy that works to their detriment. Science conspiracists are prima facie legitimate stakeholders of science governance and thus of responsible innovation, first because their views are in line with our core, fundamental values and second because their views are sometimes correct.

Science conspiracism meets responsible innovation

Science conspiracism poses a particular challenge to the idea that responsibility involves, inter alia, the inclusion of stakeholders in science and innovation (Owen et al. 2013). According to some theorists, conspiracists not only believe that a specific prevailing scientific paradigm is false but also exhibit a certain ‘disdain for orthodoxy’ which places them outside the system of ‘institutions that conventionally distinguish between knowledge and error: universities, communities of scientific researchers, and the like’ (Barkun 2003, 26). This view might be a hasty generalization that does not hold for all conspiracists out there, but it does immediately bring to light the challenge that science conspiracists bring to responsible innovation. The challenge is this: neither responsible innovation nor conspiracism seems to be able to ‘meet’ in the arena of public engagement without some loss of identity. Let us elaborate.

First, from the perspective of responsible innovation, the integration of science conspiracists within responsible innovation exercises (ignoring the details of those exercises for now) would mean taking seriously the conspiracists’ claims that the exercises in question might be designed to deceive – at least when they pertain to the technology they contest. The inclusion thus begins right out of the gate with an impasse: if science conspiracists become part of the game, then the game organizers must take seriously the claim that the game itself could be rigged to such an extent as to cast doubt on the good faith of the inclusion process. Can the game be played with someone holding these beliefs? Second, from the perspective of science conspiracists themselves, involvement in the responsible innovation exercise would mean a (partial) loss of identity. The conspiracist’s conflict with the science in question is much more complex than the conflict between, e.g. those who favor a technology and those who oppose it, or those who believe in the truth of a theory and those who believe in its falsity. Participating in what from their point of view must appear as precisely such a machination would mean at least partially giving up their conspiracist identity.

Underlying these practical issues there is a more fundamental need to turn our public engagement tools toward the more critical publics, a need that has been voiced in a variety of ways for decades (Wynne 2007; Goodin and Dryzek 2006; Felt and Fochler 2010). Indeed, some exercises in public engagement have sought to absorb critique and even actively seek it (Selin et al. 2017; Irwin, Jensen, and Jones 2012). Yet even such discussions seem to be primarily concerned with what might be called mild sentiments against this or that technology. The disagreements that surface during such exercises are not fundamental clashes between science and its alternative narratives, nor do they seriously include the possibility that, as in the tetraethyl lead example, the main narrative is essentially a cover for larger interests. The classic responsible innovation exercise is simply not attuned to integrating such extreme views. From the selection of stakeholders to the actual involvement exercises, the process is shielded against deep disagreements between fundamentally different worldviews (Blok 2019). A public engagement exercise can concern the research agenda for creating the vaccine, its deployment, the funding and its allocation and other such matters of science governance. The reflective public engagement expert might even dare to invite some stakeholders that stand in opposition to COVID-19 vaccination and the truly daring expert might even invite some stakeholders whose main narrative is that we should not vaccinate in any situation (also known as ‘anti-vaxxers’). But even with such an exceptionally wide spectrum, one has yet to turn one’s tools towards science conspiracists, who espouse a different kind of opposition to the standpoints that are up for discussion during such exercises. But here the challenge becomes pertinent again, for the question might be asked under what conditions, given their specific opposition, science conspiracists are to be included in the process. They are, as explained, prima facie includable, but under what conditions and what are the deal-breakers?

A method for testing deal-breakers

Thought experimentation is often used in science and philosophy to test ideas in the ‘laboratory of the mind’ when actual testing would bring exceeding moral or economic risks (Brown 2011; Stuart, Fehige, and Brown 2018). Some thought experiments such as Thomson’s Violinist, Searle’s Chinese Room and Schrödinger’s Cat have triggered extended discussions in moral philosophy, cognitive science and physics respectively and are also well known outside academia. Thought experiments have a recognizable structure (Popa 2016). The thought experimenter starts with a thesis and constructs an imaginary scenario in which the acceptability of that thesis becomes questionable (or in any case more questionable than it seemed at the beginning). We can exemplify this structure by looking at the thought experiment known as Thomson’s Violinist. In her famous paper ‘A Defense of Abortion’, Thomson constructs the following scenario as a means to test the thesis that abortion is morally unjustified because the fetus is a person (Thomson 1971):

So the fetus may not be killed; an abortion may not be performed. It sounds plausible. But now let me ask you to imagine this. You wake up in the morning and find yourself back-to-back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist's circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, ‘Look, we're sorry the Society of Music Lovers did this to you – we would never have permitted it if we had known. But still, they did it, and the violinist now is plugged into you. To unplug you would be to kill him. But never mind, it's only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.’ Is it morally incumbent on you to accede to this situation? No doubt it would be very nice of you if you did, a great kindness. But do you have to accede to it?

Thomson’s example illustrates the standard steps of thought experimentation. First, the author posits a thesis that has some degree of plausibility. In most cases, the thesis is thought to be true or acceptable by other scholars or more generally by society. Second, as if creating an experimental set-up, the author describes an imaginary scenario. The imaginary scenario is not meant to be plausible or even probable. In fact, it typically describes a situation that stretches our imagination of what is logically possible. In some thought experiments, people travel to the edges of the universe (Lucretius) or follow a lightning bolt (Einstein), they perform computational tasks incredibly fast (Searle), or they discover their doppelganger (Putnam). These stories are not meant to be believed; they are merely testing environments. Then, in a third step, the author shows that in the described scenario, the initial claim loses its plausibility. In Thomson’s case, the initially plausible claim that abortion is immoral loses its plausibility when, prompted by the scenario, we take into consideration the immoral situation in which the mother finds herself. The method of thought experimentation is thus a method of stretching our intuition to allow testing that could otherwise not occur for practical or ethical reasons (Stuart, Fehige, and Brown 2018).

But thought experiments do more than just falsify initially plausible claims in the laboratory of the mind. Important for the present purposes is that thought experiments are discussion starters, meaning that they are meant to trigger discussion of claims, ideas and perspectives that were otherwise lying dormant due to the broad agreement that surrounds them. Due to this communicative function as discussion starters, they resemble the use of scenarios in various public engagement methods such as the ‘value scenario’ in value-sensitive design (Friedman and Hendry 2019) or the ‘socio-technical scenario’ in constructive technology assessment (Rip and Te Kulve 2008). There is, however, a significant difference. Given their predictive and anticipatory function, value scenarios and socio-technical scenarios need to be realistic given current knowledge of an innovation system. In this standard use of scenarios, participants envision long-term developments as well as their hopes and fears regarding these developments. But the entire exercise needs to stay fairly close to the realm of the explicable. In contrast to this, the method of thought experimentation does not have any predictive ambitions and can therefore work with extreme and highly improbable scenarios (think of the situation in which Thomson’s Violinist finds himself). The predictive function of scenarios used in anticipatory methods imposes an inevitable restriction on the plausibility and explicability of the resulting story. The communicative function of scenarios used in thought experimentation, on the contrary, eschews the question of plausibility and explicability, creating a discussion environment in which conceptual and ontological issues are brought to light.

Thought experiments on stakeholder inclusion

In this section, we put forward four thought experiments in which we explore several possible deal-breakers that would plausibly justify excluding conspiracists from responsible innovation exercises (or science governance more generally). These criteria have an initially plausible ring to them, so it is worthwhile to test them in the laboratory of the mind. All four thought experiments share the following scenario – in each, we imagine a group of responsible innovation scholars who consider excluding conspiracists from science and technology governance activities. In each, there is a certain exclusion criterion that we will make explicit at the beginning. This entire exercise will be performed within some limits. That is, we assume that the science conspiracists in question are not disruptive of basic human rights and democratic principles and that they do not have criminal or other clearly immoral intentions. We assume for the purposes of this paper that the exclusion of the conspiracist-turned-xenophobic-tyrant is self-evident. Additionally, we assume that the remaining science conspiracists have a fundamental right to be included in science and technology governance. In any case, to deny this right means to place the burden of proof on the shoulders of our (imaginary) responsible innovation practitioners. All this was explained in the previous sections. There will be, in short, no exclusion of conspiracists simply because of their status as a counterculture or because of their fundamental disbelief in mainstream science with regard to a specific innovation process or product. But if the deal-breakers work, then they might function as a reason for exclusion.

Fraud as a deal-breaker

The first potential deal-breaker is that conspiracists might forfeit their inclusion rights through fraudulent conduct with the aim of convincing. By ‘fraudulent conduct’ we mean manipulated (‘fake’) videos, false information and other verbal and non-verbal ways of gaining assent unrightfully. In fields such as (critical) discourse analysis or argumentation theory, scholars employ various names for such conduct, e.g. ‘unreasonableness’, ‘propaganda’, ‘manipulation’, ‘fallaciousness’ and, of course, the recently popular ‘fake news’ (van Eemeren et al. 2014; Boudry, Paglieri, and Pigliucci 2015). This type of discourse has come to the fore in connection with a series of manipulated (‘fake’) videos in the summer of 2020 in which 5G antennas were shown to contain components labeled ‘COV-19’. In one video, a man presents himself as a telecom engineer and reveals antenna components labeled ‘COV-19’. The video fueled already-existing suspicion that the 5G technology was somehow responsible for the global COVID-19 pandemic. The same goes for the Plandemic documentary mentioned in the previous section. While the video was not visually manipulated, the imitation of the documentary genre is evident and it might deceive viewers into accepting the large number of baseless claims regarding the origin and spread of the COVID-19 virus (Funke 2020). The case can be made that such unreasonable discursive means exclude their creators and users from the dialogue on science and technology since the dialogue would only be jeopardized by their readiness to appeal to such unorthodox – and morally questionable – means of convincing. But is a fraudulent way of gaining assent sufficient reason to exclude conspiracists from the scientific/technological dialogue?

Of course, there are degrees of fraud in gaining assent, but for our purposes it is best to make the case a fortiori. If the utmost fraud turns out not to be sufficient for exclusion, then of course ‘milder’ discursive practices are a fortiori safer from exclusion. The question is thus: Is the use of downright fraudulent means for obtaining adherents to one’s conspiracy theory sufficient for exclusion from dialogue efforts in responsible innovation exercises? Notice that the question touches upon a serious theme in the field of responsible innovation, viz., the required uniformity of discursive norms and practices in the dialogue on science and technology (see also Stilgoe, Lock, and Wilsdon 2014). Ultimately, we are asking what the limits of diversity and pluralism are when it comes to discursive norms and practices.

In order to test the claim that fraud is a sufficient reason for exclusion, imagine this. You are a responsible innovation scholar who discovers one day that all candidate citizens to be included in a public engagement exercise on a hotly debated issue – all supporters on one side or the other – are in fact chemically brainwashed such that they can only appeal to fraudulent means of persuasion. Say the issue in question is that of animal well-being. The Society of Animal Lovers, we might imagine, kidnaps the candidate citizens (Thomson-style, see previous section) and brainwashes them such that they defend the standpoint ‘Animal slaughtering should be banned globally and permanently’ exclusively through fraudulent discursive means (fake videos, fallacies, false information etc.). These individuals are so brainwashed that they see such means as perfectly fine and even rational. The Society is so successful that, after a while, all those who defend this viewpoint have in fact been brainwashed. As a responsible innovation scholar, you can only include that particular standpoint by including that kind of discourse and thus allowing fraudulent discursive means into the dialogue. Is exclusion justified?

Excluding these brainwashed candidates from the dialogue on animal slaughtering is perhaps understandable. In fact, one could argue that the exclusion serves the noble purpose of deterring others from such fraudulent behavior – it works as a form of punishment. But through the imaginary scenario we are able – indeed, forced – to separate the irrationality of the beliefs from the irrationality of the discursive means by which those beliefs are acquired and defended. Being in the wrong and being ‘brainwashed’ are not the same thing. The imaginary scenario forces us to ask ourselves whether fraud in acquiring and defending beliefs is by itself sufficient for exclusion. In real life, the use of ‘fake science’ to show that there is a conspiracy – as many groups did around the world in the early months of the pandemic claiming that ‘COVID-19 is a hoax’ (Goodman and Carmichael 2020) – is difficult to judge because it typically occurs in the context of an already unlikely or seemingly unwarranted conspiracy claim. In the imaginary scenario, on the contrary, we are forced to focus on the effect of the fraudulent means on our decision-making, leaving aside the acceptability of the defended standpoint. This is not only useful, since we want to isolate the ‘variable’ of fraud in the decision-making process, but it is also a fair imitation of the real-life situation where the truth of statements pertaining to the effects of emerging technologies cannot be established in the beginning (Collingridge 1982; Genus and Stirling 2018).

Are we then justified in excluding the brainwashed members of the Society of Animal Lovers? The scenario forces us to reconsider the initial intuition and it does so by separating fraud from other confounding variables – i.e. ‘brainwashing’. First, the group appears to defend a prima facie plausible standpoint, so that implausibility cannot spill over and influence our decision. Second, the group wields discursive means that they cannot help but employ – they have been ‘brainwashed’. Third, the group would not share our evaluation of their discursive means as unreasonable or downright fraudulent. What this shows is that sanctioned methods of obtaining societal support, viz., what we refer to as ‘rational’ or ‘reasonable’ methods of argumentation, seem to decrease in importance as the innovation process becomes more inclusive and we take into consideration other variables that play a part in drawing the line between includables and unincludables (more on this in the next thought experiments). We must expect newly included groups to bring to the table not simply different views but also different individual histories (brainwashing) and modes of interaction (fraud). Of course, from the perspective of mainstream notions of science and argumentation around science, these histories and modes of interaction are not simply different – they are deviant. But when stripped of all other variables that might be stacked against such a group, inclusion seems painstakingly difficult, but not conceptually impossible. However, conspiracy theories can only be recognized as forming a respectable discussion point in the general democratization of science if those who organize the ensuing dialogue recognize them as valid ‘civil epistemologies’, that is, ‘culturally specific, historically and politically grounded, public knowledge-ways’ (Jasanoff 2005, 249). To be sure, chemical brainwashing is patently not a public knowledge-way nor is it an admirable way of obtaining converts. But the imaginary scenario revealed that even such extreme methods of conversion fail to directly invalidate the ethical claim made by that group and the plausibility of their inclusion in responsible innovation exercises. If brainwashing does not seem to cut it, then, a fortiori, misleading video footage will do so even less.

Anonymity as a deal-breaker

Conspiracists sometimes remain anonymous, or they propagate the theory under false or forgotten identities. Conspiracists in particular and fringe groups more generally prefer non-standard forms of communication: word of mouth rather than public speech, rumor rather than declaration, tongue-in-cheek reference rather than endorsement (Byford 2011). This is not to say that institutionalization cannot occur – and the tetraethyl lead example discussed earlier shows that this can be conducted by corporations in plain sight. Associations such as 9/11 Truth and Scholars for 9/11 Truth have institutionalized the belief that the September 11 terrorist attacks were the result of one or more conspiracies. More to the topic of science conspiracism, the association known as America’s Frontline Doctors promoted conspiracy theories about the suppression of knowledge around Ivermectin as a treatment against COVID-19 (Bergengruen 2021) and The Epoch Times, mentioned earlier, promoted the idea that China is behind the entire pandemic. Yet these are by and large exceptional events. There is typically no locus classicus where a diligent responsible innovation scholar can study the original texts of a science conspiracy theory or get acquainted with the paradigmatic arguments. It seems plausible therefore to contend that the inclusion of science conspiracists requires flesh-and-blood individuals to champion the science-conspiracist cause relative to a certain innovation and not just the spectral existence of the conspiracy theory itself. The ideal of stakeholder inclusion starts, after all, with the concept of the stakeholder. It seems only natural that someone needs to actually ‘hold’ the ‘stakes’. But imagine this.

You are a responsible innovation scholar who organizes a citizen panel around the worldwide use of antibiotics and growth promoters in agriculture (Edqvist and Pedersen 2001; Lipsitch, Singer, and Levin 2002). Aware of the need to include critical voices, you invite members of the Society of Animal Lovers who claim that the use of antibiotics for animals is the result of a worldwide conspiracy of the big meat industries (who use their power to increase the use of antibiotics in order to produce more meat) and ‘Big Pharma’ (who exploit the situation to sell more antibiotics). The Society’s members concede that antibiotics are probably the best innovation the science of medicine has produced, but they reject their use in agriculture as the result of schemes designed in their view to satisfy economic interests. We further imagine that the Society is not what it used to be. Unbeknown to you, the people you invite to the citizen panel are the only remaining individuals in the world who maintain the beliefs in question. Everyone else has either given up the belief in that particular science conspiracy or has never held the belief to start with. And then this: by some extraordinary coincidence, all those you invite to the citizen panel die overnight of completely unrelated causes before they get to attend your citizen panel. The only ‘holders’ of this particular ‘stake’ are not among us and the theory survives only as a past, un-institutionalized form of discourse with no one to champion it. (If needed, we might even assume that there is a fail-safe way for you to check that, at the moment of the responsible innovation exercise, there is no one else holding the beliefs in question. You thus know for certain that those particular stakes are not represented by anyone). Are you justified in excluding this ‘faceless’ science conspiracy theory and its arguments from your agenda?

Once more, the plausibility of the theory helps us make a distinction that we are otherwise prone to overlook. We are now forced to make a distinction between the science conspiracy theory and its theorists, between what can be called the social capital and the discursive capital of a certain stakeholder group. The social capital consists of those who believe and champion the cause in question (the individuals themselves or the institutions representing them) whereas the discursive capital consists of the linguistic and non-linguistic artifacts created in the name of those beliefs (a speech, an article, a newsletter). The scenario pushes us to admit that even when the social capital is depleted or unidentifiable, the discursive capital can still prevail and can still constitute an anchoring point for inclusion. The result is surprising: it is not individuals as such that we are including, but the arguments and lines of thought that they bring.

In the scenario, a reasonable approach for the responsible innovation scholar is to attempt the inclusion of the group based on its discursive capital. This would mean trying to find and understand the claims that have moved those individuals and make room for those claims in the responsible innovation exercise. If nothing can be found, then the cause is lost, but if the discursive capital outlives its social capital, the stakeholders are still, discursively speaking, very much alive. In normal circumstances, it seems strange to suggest that anonymous conspiracy theories circulating freely on the internet must be taken seriously. Yet the imaginary scenario has shown that the exclusion is difficult to sustain purely based on the lack of social capital. It should perhaps be noted that academics are no strangers to working with groups with unclear or unidentifiable social capital. After all, scholars take claims and counterclaims seriously not because there is some identifiable individual behind them but because of their content. For all we know, there might not be a single academic out there who believes, e.g. in the truth and usefulness of existentialism, but that does not seem to deter academics from pondering ceaselessly over existentialist claims. Thus, since anonymity only invalidates a conspiracy theory’s social capital, it does not seem to function as a deal-breaker for inclusion in the societal dialogue.

Irrelevance as a deal-breaker

Our next potential deal-breaker, irrelevance, poses some methodological challenges. Unlike fraud and anonymity, which we can gauge both theoretically and pre-theoretically, it is exceedingly difficult to say when a conspiracy theory is relevant for a certain innovation process or product. We might have clear pre-theoretical intuitions of what it means to commit fraud discursively (the fake video was a fairly unproblematic example) and the same holds for anonymity, but the relevance, or impact, of a conspiracy theory is difficult to ascertain. At the same time, it is obvious that not all conspiracy theories are equally relevant for any one innovation process or product. The conspiracy theory according to which the COVID-19 pandemic is a cover-up for deploying the 5G technology is (intuitively speaking) closer to the 5G topic than, say, the theory according to which the Earth is flat, let alone the theory according to which Lady Diana was assassinated. Similarly, some conspiracy theories are outdated (e.g. the theory according to which the addition of fluoride to public water is a Communist plot). The inclusion of such groups into responsible innovation dialogues pertaining to a specific technology seems pointless. Indeed, the case can be made that it is the duty of the responsible innovation scholar precisely to focus on theories that are urgent in a certain socio-technical context rather than on dormant or irrelevant theories that will have little to claim about the technology under discussion. Now imagine this.

Imagine you are a responsible innovation scholar in possession of an infallible lie detector that can help you determine, for each science conspiracist you invite, which conspiracy theory they espouse. For the purpose of the present scenario, we can assume that you also have a measure of relevance that has been agreed upon by everyone and is also infallible (we will get back to this point). With these two instruments, you can pinpoint exactly which conspiracist beliefs you can admit to your responsible innovation exercise and exclude the irrelevant ones. Let us imagine that some conspiracist members of the Society of Animal Lovers insist on participating in your responsible innovation exercise on 5G technology. Clearly, the Society’s members will not pass this test simply because the technology responsible for animal slaughtering and 5G technology have little to do with each other – a lack of connection with which they agree. However, we might imagine that the two technologies are in fact connected in some other way, e.g. the conspirators accused by the Society are in fact the same group of individuals that are now promoting and distributing 5G technology, or the 5G technology is in fact going to lead to more animal slaughtering. Yet such conclusions can only be reached during the dialogue, when each party fully expands on their worldview. It seems then that relevance is something one arrives at partly during a deliberative and inclusive dialogue process rather than something one can formalize in a clear-cut entry rule.

One might be inclined to respond that we have not defined our ‘entrance rule’ correctly – that the relationship between the technologies is not the correct way to exclude or include people. But the entrance rule and the actual relationship can both be modified in many ways so as not to correspond. Every time the relevance-based entrance is enforced, an imaginary conspiracy theory can be devised that is at the same time relevant for the technology in question and deemed irrelevant by the entrance rule. In the end, what imaginary scenarios of this kind show is that relevance is not only in the eye of the beholder – something we took as a point of departure – but also that relevance might be intractable during a screening process. It is hard to establish whether the Society’s members in the original scenario are relevant to the innovation exercise or not before the exercise takes place. The problem is, however, that if relevance can only be established intra muros, then we might have opened the door for just about anything since, with enough imagination, various apparently unrelated technologies and conspiracy theories can turn out to be related after all in ways that can only be discovered during an inclusive exercise. If everything can be relevant, nothing is irrelevant – at least not prior to discussion. The problem seems to be alleviated by the assumption that stakeholders should be allowed to select themselves by claiming the right to participation. Since only stakeholders who are willing to participate are eventually considered, the pool of possible forms of relevance seems to be reduced in this way.
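The difficulty with any such entrance rule can be put schematically. Let $R$ range over candidate entrance rules, let $T$ range over conspiracy theories, and let $\mathrm{Rel}(T, I)$ stand, roughly, for ‘$T$ is relevant to the innovation $I$ under discussion’ (the predicate names are placeholders for illustration, not part of any established formalism). The claim that every entrance rule can be outmaneuvered then reads, approximately:

$$\forall R \;\exists T : \;\mathrm{Rel}(T, I) \,\wedge\, \neg R(T)$$

That is, for every screening rule there is some theory that is relevant to $I$ and yet rejected by the rule. The sketch can only be schematic, of course, since $\mathrm{Rel}$ is precisely the notion that, as argued above, resists definition prior to the dialogue.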

Unresponsiveness and unfalsifiability as deal-breakers

Being responsive to the alternative demands and alternative narratives of other stakeholders has always been placed at the core of the responsible innovation movement (Owen, Macnaghten, and Stilgoe 2012; Blok 2019). In a different field, political philosophy, the claim has been made that the core value that unites various forms of liberal societies throughout history is their acceptance, in one form or another, of the principle that the other must be heard, regardless of whether they are in the wrong and perhaps especially when they are in the wrong (Hampshire 2018; Crowder 2021, 51). In various RRI models and theories proposed in the past decades, responsiveness is always one of the core requirements for responsibility, meaning that we can only speak of responsible innovation to the extent that the parties involved are responsive to each other’s demands (Stilgoe, Owen, and Macnaghten 2013).

But in our case, we can expect conspiracists to be particularly resilient when faced with counter-evidence and thus to be unresponsive. Conspiracy theories have been duly characterized as ‘remarkably wily and resilient epistemic creatures’ (Basham 2006). Obviously, some conspiracists can be convinced out of their theory, but it should be noted that this does not happen because their theory is refuted. Logically speaking, a conspiracy theory cannot be refuted to the extent that it appears as an existential statement, e.g. ‘There is x such that A is true of x’ (‘There is a group of people such that they conspire to do this and that’). Whatever piece of evidence is brought as a counterargument against such a statement, the statement can never be fully proven false, for there is nothing that the claim excludes. In Popper’s terminology, we would say that the class of potential falsifiers is empty (Popper 1992, 66). But surely unresponsiveness can act as a deal-breaker from both sides: either because the conspiracists are not responsive to alternative claims or because non-conspiracist stakeholders are not responsive to conspiracy theories. Whatever its form, extreme forms of resilience seem to go against the mentioned ideal of responsiveness in responsible innovation. Is this a two-way deal-breaker for the inclusion of conspiracism in the innovation process?
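The logical point can be made explicit with a rough sketch (the predicate names are placeholders; only the quantifier structure matters). A conspiracy claim has the existential form

$$C \;:=\; \exists x \,\bigl(\mathrm{Group}(x) \wedge \mathrm{Conspires}(x)\bigr),$$

so that its negation is the universal claim

$$\neg C \;\equiv\; \forall x \,\neg\bigl(\mathrm{Group}(x) \wedge \mathrm{Conspires}(x)\bigr),$$

which no finite body of evidence can verify: any piece of counter-evidence rules out at most some candidates for $x$, never all of them. With this structure in mind, let us return to the suggestion that unresponsiveness, from either side, functions as a deal-breaker.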

This sounds plausible, but we can imagine a slightly modified version of the previous scenario. This time, the responsible innovation scholar has access to the world’s first infallible responsiveness detector – a device that, we assume, cannot fail in detecting the degree to which someone is responsive to the demands and narratives of others. As in the previous version, we do not need to worry about how the algorithm of this mechanism works (because the possibility of determining whether someone is responsive is not at issue). And once more, we set up the device to function as a screening mechanism before entrance: only those stakeholders that are ‘responsive enough’ are allowed to join. The only difference is that in this case the responsible innovation scholar might herself need to undergo screening. As explained, responsiveness to the claims of science conspiracists is as important as the responsiveness of science conspiracists. As a result, if someone has absolutely no intention of listening to the other side and to the ethical demands of others, they will not be included in the responsible innovation exercise. The members of the Society of Animal Lovers, as it happens, are excluded once more as the result of this process. Are we justified in using the detector and excluding the unresponsive ones in this way?

The problem appears, once more, in the form of a discrepancy between the ideal of responsiveness built into the detection mechanism (whatever the ideal may be, and however it might be codified into an algorithm) and its real-life counterpart. Such a criterion of responsiveness works fine if it is shared by those who screen and those who are screened, but responsiveness, just like relevance, seems to allow a plurality of forms. The science conspiracist detected as such by our imaginary algorithm might still be responsive in a way that deviates from our ideals of responsibility – e.g. she might not be responsive during the responsible innovation exercise for fear of losing face, but she might be responsive (both during and after the responsible innovation exercise) in ways that we cannot understand, given that we do not share the same definition of responsiveness. Furthermore, and this is important for someone who holds beliefs that are unfalsifiable, the ‘response’ of both the science conspiracist and that of the responsible innovation scholar might not be detectable or even intentional. In this respect, responsiveness behaves more like a virtue than like a sine qua non condition for acceding to the democratic arena of public dialogue. We can see responsiveness as a virtue that corresponds with a certain broad and usually underdefined range of behavior (phronesis), but that behavior might not be describable in highly general terms, only ‘locally’, relative to societal and cultural context (Blok, Gremmen, and Wesselink 2015). The virtue of responsiveness has always been brought forth as an important pillar of responsible innovation (Owen, Macnaghten, and Stilgoe 2012; Asveld 2017; Blok 2019), but it might not be the sort of thing that we can codify as a (binary) rule for entrance.

Conclusion

The field of responsible innovation is very much built around the idea of inclusion as a form of democratizing science and innovation and, ultimately, as a road towards a more just social contract between science and society. In this process, responsible innovation scholars have been painstakingly aware of the need to tackle the group of ‘unincluded includables’ – stakeholders that have the right to be included in the innovation process but are not typically included. Indeed, the present journal was born out of the observation that ‘people design technologies to advance certain interests and constrain others’ and that this selection of interests needs to ‘involve a broader array of stakeholders and laypersons in decision-making about the value of [innovation]’ (Guston et al. 2014, 3). But unincluded includables such as civil society organizations and political organizations are easy repairs because, it is assumed, these stakeholders are ultimately part of the narrative under discussion. In this paper, we wanted to explore the group of unincluded unincludables – stakeholders that seem to have forfeited their right to participate in the decision-making process. The four ways in which conspiracists can be accused of having forfeited their right are (i) their use of fraudulent discursive and non-discursive tactics, (ii) their anonymity, (iii) the irrelevance of their theories to the science discussed, and (iv) unresponsiveness or unfalsifiability, on the side of the conspiracists as well as on the side of the other stakeholders. We made the case that these are genuine norms to be pursued in the responsible innovation process but that none is ultimately capable of bringing home the conclusion that science conspiracists do not belong in the responsible innovation arena. The conclusion to be drawn from our thought experiments is that, barring obvious cases of criminal or immoral conduct – as Galston puts it, referring to the ancient tradition of human sacrifice, ‘No free exercise for Aztecs!’ (Galston 2002) – the case for the exclusion of conspiracists from the democratic arena of science and technology talk is surprisingly difficult to make. Of course, the argument of the paper was cast in fairly general terms. Many specific distinctions can be made between science conspiracists and between responsible innovation exercises such that some of the general considerations might hold while others might be invalidated. Yet as the first study of its kind in the field of responsible innovation, it seems appropriate that an initial exploration (of the line between ‘includables’ and ‘unincludables’) should start by abstracting away from the many details that can be taken into consideration later on.

By way of conclusion, we would like to note that the relationship between science conspiracism and responsible innovation, two phenomena that have gained increased traction and visibility in the past decade, seems to depend at a fundamental level on two factors. First, if responsible innovation scholars respond within what Weber called the ‘ethics of principled conviction’ (Weber 1994), then of course the conspiracist can easily be set aside as someone who did not check the evidence critically enough, someone who commits and perhaps is the victim of fallacious reasoning. Recent work on conspiracism, however, has suggested that not all academics have the moral responsibility to debunk (science) conspiracy theories (Harambam 2021). On the other hand, if responsible innovation scholars respond with what Weber called the ‘ethics of responsibility’ (1994), then the conspiracist can be understood as part of a culture that works in much the same way the rest of society does, with some claims being met with disbelief, others being adopted almost immediately and uncritically, and others being somewhere in the middle. Behind these choices there might be an even deeper philosophical commitment. If the field is motivated by a monist philosophy of value (directed at one unitary good), then the exclusion can easily be justified based on standards recognized as rational. There are plenty of rational arguments to exclude science conspiracists from science. If, however, the field is motivated by value pluralism (Kaul and Salvatore 2020; Galston 2005; Crowder 2021), then conspiracists make a strong claim for their right to be included in the dialogue on science and technology.

Second, the relationship between conspiracism and responsible innovation will also hinge on the goal we set for the field of responsible innovation. If we believe that the goal is to reach a consensus on value judgments or ‘value-alignment’, then the phenomenon of conspiracism poses a serious problem because, as we argued at the beginning, its identity is subversive to the mainstream narrative. For conspiracists, by definition, the game is rigged. But if we see the goal of responsible innovation, and of inclusion more specifically, as that of creating a space for ethical reflection and responsiveness in order to reveal our own blind spots and immunized assumptions (Blok Citation2019), or if we understand it as the creation of a platform for ‘agonistic respect’ regardless of whether consensus is reached (Popa, Blok, and Wesselink Citation2020), then conspiracism suddenly appears as a rare metal that we can employ to improve the responsible innovation process. In this sense, we can note together with Coady that the negative connotation of the term ‘conspiracist’ (mentioned in Section ‘The peculiar challenge of science conspiracism’) is but a signal of a deeper phenomenon, that of derision, and thus closedness, towards science conspiracists or perhaps conspiracists more generally:

This [negative] usage serves to intimidate and silence [conspiracists], whether or not their beliefs are justified, and whether or not they are true. Hence, this usage makes it less likely that such conspiracies will be exposed (or exposed in a timely manner), and more likely that the perpetrators of conspiracies will get away with them. Hence, there is reason to think that the fact that these expressions have pejorative connotations causes our society to be less open than it otherwise would be. There is a sad irony in the fact that Popper, the author of The Open Society and Its Enemies (1962), should have started a practice (i.e. conspiracy baiting) which has made it easier for conspiracy to thrive at the expense of openness. (Coady Citation2018, 182)

The openness mentioned by Coady is precisely the value sought in many studies of responsible innovation and might even be seen as a synonym for the ideal of inclusiveness with which this paper started. Ultimately, then, the relationship between the field of responsible innovation and conspiracism is a litmus test for how open we are prepared to be when we fly the flag of inclusiveness.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Eugen Octav Popa

Dr Eugen Popa is a communication scholar working in the fields of STS (science and technology studies) and RRI (responsible research and innovation). He works as a postdoctoral researcher at the Department of Communication, Philosophy and Technology, where he is involved in various European and Dutch projects relating to stakeholder interaction in innovation. His work has been published in Informal Logic, Science and Public Policy, Public Understanding of Science, Philosophy and Technology, and Cogency. He is the winner of the 2016 J. A. Blair prize for the study of argumentation. Eugen Popa obtained his PhD in 2015 with a thesis on argumentative interactions in science (Popa 2016b) and has published papers on the reasonableness of argumentative interactions (Popa 2016a), discussion structures for reconstructing scientific debates (Popa, Blok, and Wesselink 2020b), friction between stakeholders in innovation projects (Popa, Blok, and Wesselink 2020c), and technological conflict (Popa, Blok, and Wesselink 2020a). He has been involved as a postdoctoral researcher in several Horizon 2020 projects, such as RRI Tools, RiConfigure and NewHoRRIzon, and since March 2021 in the RRIstart project. He has also worked with the Dutch Health Council, studying the interaction between scientists and policy makers in cases of public controversy.

Highlighted publications:

Popa, E. O. 2016a. “Criticism Without Fundamental Principles.” Informal Logic 36 (2): 192–216.

Popa, E. O. 2016b. “Thought Experiments in Academic Communication.” Doctoral diss., University of Amsterdam. Amsterdam. https://pure.uva.nl/ws/files/2759368/176326_proefschrift_eugen_popa_UBA_complete.pdf.

Popa, E. O., V. Blok, and R. Wesselink. 2020a. “An Agonistic Approach to Technological Conflict.” Philosophy & Technology, 1–21. doi:10.1007/s13347-020-00430-7.

Popa, E. O., V. Blok, and R. Wesselink. 2020b. “Discussion Structures as Tools for Public Deliberation.” Public Understanding of Science 29 (1): 76–93. doi:10.1177/0963662519880675.

Popa, E. O., V. Blok, and R. Wesselink. 2020c. “A Processual Approach to Friction in Quadruple Helix Collaborations.” Science and Public Policy. doi:10.1093/scipol/scaa054.

Popa, E. O. 2022. “On the Rational Resolution of (Deep) Disagreements.” Synthese 200: 270. doi:10.1007/s11229-022-03753-4.

Vincent Blok

Dr Vincent Blok MBA is an Associate Professor in Sustainable Entrepreneurship, Business Ethics and Responsible Innovation at the Management Studies Chair Group, and Associate Professor in Philosophy of Management, Technology & Innovation at the Philosophy Chair Group, Wageningen University. From 2002 to 2006, Blok held various management functions in the health care sector. In 2006, he became director of the Louis Bolk Institute, an international research institute in the field of organic and sustainable agriculture, nutrition and health care. In 2005 he received his PhD degree in philosophy at Leiden University with a specialization in philosophy of technology. Together with 10 PhD candidates and 1 postdoc, Blok pursues three lines of research – Business Ethical Issues in Sustainable Entrepreneurship, Philosophy of Management, Technology & Innovation, and Responsible Innovation in the private sector – in several (European) research projects. Blok's work has appeared in, among others, Journal of Business Ethics, Journal of Cleaner Production, Journal of Agricultural and Environmental Ethics, Journal of the British Society for Phenomenology, Studia Phaenomenologica, and Journal of Responsible Innovation. See www.vincentblok.nl for his recent research projects.

References

  • Asveld, Lotte, ed. 2017. Responsible Innovation 3: A European Agenda? Cham: Springer.
  • Barkun, Michael. 2003. A Culture of Conspiracy: Apocalyptic Visions in Contemporary America, Comparative Studies in Religion and Society. Berkeley: University of California Press.
  • Basham, L. 2006. “Afterthoughts on Conspiracy Theory: Resilience and Ubiquity.” In Conspiracy Theories: The Philosophical Debate, edited by David Coady, 133–139. Aldershot: Ashgate.
  • Bellemare, A., J. Ho, and K. Nicholson. 2020. “Some Canadians Who Received Unsolicited Copy of Epoch Times Upset by Claim that China Was Behind Virus.” CBC NEWS, May 1. https://www.cbc.ca/news/canada/epoch-times-coronavirus-bioweapon-1.5548217.
  • Bergengruen, V. 2021. “How ‘America's Frontline Doctors’ Sold Access to Bogus COVID-19 Treatments – and Left Patients in the Lurch.” Time.
  • Blok, V. 2014. “The Metaphysics of Collaboration: Identity, Unity and Difference in Cross-Sector Partnerships for Sustainable Development.” Philosophy of Management 13 (2): 53–74. doi:10.5840/pom201413211.
  • Blok, V. 2019. “From Participation to Interruption: Toward an Ethic of Stakeholder Engagement, Participation, and Partnership in Corporate Social Responsibility and Responsible Innovation.” In International Handbook of Responsible Innovation, edited by R. von Schomberg and J. Hankins, 243–259. Cheltenham: Edward Elgar.
  • Blok, V., B. Gremmen, and R. Wesselink. 2015. “Dealing with the Wicked Problem of Sustainability: The Role of Individual Virtuous Competence.” Business & Professional Ethics Journal 34 (3): 297–327.
  • Boudry, Maarten, Fabio Paglieri, and Massimo Pigliucci. 2015. “The Fake, the Flimsy, and the Fallacious: Demarcating Arguments in Real Life.” Argumentation 29 (4): 431–456. doi:10.1007/s10503-015-9359-1.
  • Brown, James Robert. 2011. The Laboratory of the Mind: Thought Experiments in the Natural Sciences. Houndmills, Basingstoke: Routledge.
  • Byford, Jovan. 2011. Conspiracy Theories: A Critical Introduction. Houndmills: Palgrave Macmillan.
  • Cerulus, L. 2020. “5G Arsonists Turn Up in Continental Europe.” Politico. Accessed October 2020. https://www.politico.com/news/2020/04/26/5g-mast-torchers-turn-up-in-continental-europe-210736.
  • Coady, David. 2006. “An Introduction to the Philosophical Debate About Conspiracy Theories.” In Conspiracy Theories: The Philosophical Debate, edited by David Coady, 1–13. Aldershot: Ashgate.
  • Coady, David. 2018. “Anti-Rumor Campaigns and Conspiracy-Baiting as Propaganda.” In Taking Conspiracy Theories Seriously, edited by Matthew R. X. Dentith, 171–189. Lanham: Rowman & Littlefield International.
  • Collingridge, David. 1982. The Social Control of Technology. Cambridge: Cambridge University Press.
  • Crowder, George. 2021. The Problem of Value Pluralism: Isaiah Berlin and Beyond. New York: Routledge.
  • Cuppen, Eefje. 2012. “Diversity and Constructive Conflict in Stakeholder Dialogue: Considerations for Design and Methods.” Policy Sciences 45 (1): 23–46. doi:10.1007/s11077-011-9141-7.
  • de Bakker, Erik, Carolien de Lauwere, Anne-Charlotte Hoes, and Volkert Beekman. 2014. “Responsible Research and Innovation in Miniature: Information Asymmetries Hindering a More Inclusive ‘Nanofood’ Development.” Science and Public Policy 41 (3): 294–305. doi:10.1093/scipol/scu033.
  • Dentith, Matthew R. X. 2014. The Philosophy of Conspiracy Theories. Houndmills: Palgrave Macmillan.
  • Dentith, Matthew R. X. 2018. “The Problem of Conspiracism.” Argumenta 3 (2): 327–343.
  • Dunlap, R. E., and A. M. McCright. 2010. “Climate Change Denial: Sources, Actors and Strategies.” In Routledge Handbook of Climate Change and Society, edited by C. Lever-Tracy, 240–261. London: Routledge.
  • Edqvist, L., and K. Pedersen. 2001. “Antimicrobials as Growth Promoters: Resistance to Common Sense.” In Late Lessons from Early Warnings: The Precautionary Principle 1896–2000, edited by D. Gee, 93–101. Copenhagen: EEA.
  • Felt, Ulrike, and Maximilian Fochler. 2010. “Machineries for Making Publics: Inscribing and de-Scribing Publics in Public Engagement.” Minerva 48 (3): 219–238.
  • Frenkel, S., B. Decker, and D. Alba. 2020. “How the ‘Plandemic’ Movie and Its Falsehoods Spread Widely Online.” New York Times. Accessed September 23, 2021. https://www.nytimes.com/2020/05/20/technology/plandemic-movie-youtube-facebook-coronavirus.html.
  • Friedman, Batya, and David Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge: The MIT Press.
  • Funke, D. 2020. “Fact-Checking ‘Plandemic’: A Documentary Full of False Conspiracy Theories about the Coronavirus.” Politifact. Accessed October 2020. https://www.politifact.com/article/2020/may/08/fact-checking-plandemic-documentary-full-false-con/.
  • Galston, William A. 2002. Liberal Pluralism: The Implications of Value Pluralism for Political Theory and Practice. Cambridge: Cambridge University Press.
  • Galston, William A. 2005. The Practice of Liberal Pluralism. Cambridge: Cambridge University Press.
  • Gee, D., ed. 2001. Late Lessons from Early Warnings: The Precautionary Principle 1896–2000. Copenhagen: EEA.
  • Genus, Audley, and Marfuga Iskandarova. 2018. “Responsible Innovation: Its Institutionalisation and a Critique.” Technological Forecasting and Social Change 128: 1–9. doi:10.1016/j.techfore.2017.09.029.
  • Genus, Audley, and Andy Stirling. 2018. “Collingridge and the Dilemma of Control: Towards Responsible and Accountable Innovation.” Research Policy 47 (1): 61–69. doi:10.1016/j.respol.2017.09.012.
  • Goodin, Robert E., and John S. Dryzek. 2006. “Deliberative Impacts: The Macro-Political Uptake of Mini-Publics.” Politics & Society 34 (2): 219–244. doi:10.1177/0032329206288152.
  • Goodman, J., and F. Carmichael. 2020. “Coronavirus: Bill Gates ‘Microchip’ Conspiracy Theory and Other Vaccine Claims Fact-Checked.” BBC News. Accessed January 2021. https://www.bbc.com/news/52847648.
  • Goyer, Robert A. 1990. “Lead Toxicity: From Overt to Subclinical to Subtle Health Effects.” Environmental Health Perspectives 86: 177–181.
  • Groves, Christopher. 2009. “Future Ethics: Risk, Care and Non-Reciprocal Responsibility.” Journal of Global Ethics 5 (1): 17–31.
  • Grunwald, Armin. 2009. “Technology Assessment: Concepts and Methods.” In Philosophy of Technology and Engineering Sciences, edited by Anthonie Meijers, 1103–1146. Amsterdam: North-Holland.
  • Grunwald, Armin. 2011. “Responsible Innovation: Bringing Together Technology Assessment, Applied Ethics, and STS Research.” Enterprise and Work Innovation Studies 7: 9–31.
  • Guston, David H., Erik Fisher, Armin Grunwald, Richard Owen, Tsjalling Swierstra, and Simone van der Burg. 2014. “Responsible Innovation: Motivations for a new Journal.” Journal of Responsible Innovation 1 (1): 1–8. doi:10.1080/23299460.2014.885175.
  • Hampshire, Stuart. 2018. Justice Is Conflict. Princeton, NJ: Princeton University Press.
  • Hansson, Sven Ove. 2017. The Ethics of Technology: Methods and Approaches, Philosophy, Technology and Society. London: Rowman & Littlefield International.
  • Harambam, Jaron. 2021. “Against Modernist Illusions: Why We Need More Democratic and Constructivist Alternatives to Debunking Conspiracy Theories.” Journal for Cultural Research 25 (1): 104–122. doi:10.1080/14797585.2021.1886424.
  • Harremoës, P., D. Gee, M. MacGarvin, A. Stirling, J. Keys, Brian Wynne, and S. Vaz, eds. 2013. Late Lessons from Early Warnings: Science, Precaution, Innovation. Vol. 1. Copenhagen: European Environmental Agency.
  • Hellstrom, Tomas. 2003. “Systemic Innovation and Risk: Technology Assessment and the Challenge of Responsible Innovation.” Technology in Society 25 (3): 369–384.
  • Irwin, Alan, Torben Elgaard Jensen, and Kevin E. Jones. 2012. “The Good, the Bad and the Perfect: Criticizing Engagement Practice.” Social Studies of Science 43 (1): 118–135. doi:10.1177/0306312712462461.
  • Jasanoff, Sheila. 2005. Designs on Nature: Science and Democracy in Europe and the United States. Princeton, NJ: Princeton University Press.
  • Kalichman, Seth C. 2014. “The Psychology of AIDS Denialism.” European Psychologist 19 (1): 13–22. doi:10.1027/1016-9040/a000175.
  • Kaul, Volker, and Ingrid Salvatore. 2020. What is Pluralism? Abingdon: Routledge.
  • Keeley, B. 2006. “Of Conspiracy Theories.” In Conspiracy Theories: The Philosophical Debate, edited by David Coady, 45–61. Aldershot: Ashgate.
  • Koops, B., I. Oosterlaken, H. Romijn, T. Swierstra, and Jeroen van den Hoven, eds. 2015. Responsible Innovation 2: Concepts, Approaches and Applications. Cham: Springer International Publishing.
  • Kovarik, W. 2005. “Ethyl-Leaded Gasoline: How a Classic Occupational Disease Became an International Public Health Disaster.” International Journal of Occupational and Environmental Health 11 (4): 384–397. doi:10.1179/oeh.2005.11.4.384.
  • Lipsitch, Marc, Randall S. Singer, and Bruce R. Levin. 2002. “Antibiotics in Agriculture: When Is It Time to Close the Barn Door?” Proceedings of the National Academy of Sciences 99 (9): 5752. doi:10.1073/pnas.092142499.
  • Marques, Mathew D., John R. Kerr, Matt N. Williams, Mathew Ling, and Jim McLennan. 2021. “Associations Between Conspiracism and the Rejection of Scientific Innovations.” Public Understanding of Science. doi:10.1177/09636625211007013.
  • Menkes, David B., and J. Paul Fawcett. 1997. “Too Easily Lead? Health Effects of Gasoline Additives.” Environmental Health Perspectives 105 (3): 270–273.
  • Needleman, H., and D. Gee. 2013. “Lead in Petrol ‘Makes the Mind Give Way’.” In Late Lessons from Early Warnings: Science, Precaution, Innovation, edited by EEA, 46–79. Copenhagen: European Environmental Agency.
  • Nooteboom, Bart, Wim Van Haverbeke, Geert Duysters, Victor Gilsing, and Ad Van den Oord. 2007. “Optimal Cognitive Distance and Absorptive Capacity.” Research Policy 36 (7): 1016–1034.
  • Offit, Paul A. 2017. Pandora's Lab: Seven Stories of Science Gone Wrong. Washington, DC: National Geographic.
  • Owen, R., P. Macnaghten, and J. Stilgoe. 2012. “Responsible Research and Innovation: From Science in Society to Science for Society, with Society.” Science and Public Policy 39 (6): 751–760.
  • Owen, R., J. Stilgoe, P. Macnaghten, M. Gorman, E. Fisher, and David H. Guston. 2013. “A Framework for Responsible Innovation.” In Responsible Innovation, edited by R. Owen, John Bessant, and Maggy Heintz, 27–51. London: John Wiley & Sons.
  • Parker, Martin. 2000. “Human Science as Conspiracy Theory.” The Sociological Review 48 (2_suppl): 191–207.
  • Popa, E. O. 2016. “Thought Experiments in Academic Communication.” Doctoral diss., University of Amsterdam.
  • Popa, E. O., V. Blok, and R. Wesselink. 2020. “An Agonistic Approach to Technological Conflict.” Philosophy & Technology, 1–21. doi:10.1007/s13347-020-00430-7.
  • Popper, Karl R. 1992. The Logic of Scientific Discovery. London: Routledge.
  • Pummerer, Lotte, Robert Böhm, Lau Lilleholt, Kevin Winter, Ingo Zettler, and Kai Sassenberg. 2020. “Conspiracy Theories and Their Societal Effects During the COVID-19 Pandemic.” Social Psychological and Personality Science. doi:10.1177/19485506211000217.
  • Rip, Arie, and Haico Te Kulve. 2008. “Constructive Technology Assessment and Socio-Technical Scenarios.” In Presenting Futures, edited by E. Fischer, C. Selin, and J. Wetmore, 49–70. Springer.
  • Selin, Cynthia, Kelly Campbell Rawlings, Kathryn de Ridder-Vignone, Jathan Sadowski, Carlo Altamirano Allende, Gretchen Gano, Sarah R Davies, and David H. Guston. 2017. “Experiments in Engagement: Designing Public Engagement with Science and Technology for Capacity Building.” Public Understanding of Science 26 (6): 634–649.
  • Stilgoe, J., Simon J. Lock, and James Wilsdon. 2014. “Why Should We Promote Public Engagement with Science?” Public Understanding of Science 23 (1): 4–15. doi:10.1177/0963662513518154.
  • Stilgoe, J., R. Owen, and P. Macnaghten. 2013. “Developing a Framework for Responsible Innovation.” Research Policy 42 (9): 1568–1580.
  • Stuart, Michael T., Y. Fehige, and James Robert Brown, eds. 2018. The Routledge Companion to Thought Experiments, Routledge Philosophy Companions. London/New York: Routledge/Taylor & Francis Group.
  • Thomson, Judith Jarvis. 1971. “A Defense of Abortion.” Philosophy & Public Affairs 1 (1): 47–66.
  • van den Hoven, Jeroen, N. Doorn, T. Swierstra, B. Koops, and Henny Romijn, eds. 2014. Responsible Innovation 1: Innovative Solutions for Global Issues. New York: Springer Science + Business Media.
  • van Eemeren, F. H., Bart Garssen, E. C. W. Krabbe, Arnolda Francisca Snoeck Henkemans, Bart Verheij, and Jean Hubert Martin Wagemans. 2014. Handbook of Argumentation Theory. Dordrecht: Springer Reference.
  • Weber, Max. 1994. Weber: Political Writings, Cambridge Texts in the History of Political Thought. Cambridge: Cambridge University Press.
  • Wilsdon, James, and Rebecca Willis. 2004. See-Through Science: Why Public Engagement Needs to Move Upstream. London: Demos.
  • Wynne, B. 2007. “Public Participation in Science and Technology: Performing and Obscuring a Political–Conceptual Category Mistake.” East Asian Science, Technology and Society: An International Journal 1 (1): 99–110.