
The roles of ethics in gene drive research and governance

Pages S159-S179 | Received 10 May 2016, Accepted 26 Jul 2017, Published online: 03 Jan 2018

ABSTRACT

Ethics research queries the norms and values that shape the goals and justification for gene drive projects, and that might lead to objections or opposition to such projects. A framework for organizing ethics research is offered. In addition to basic research ethics and risk assessment, gene drives will give rise to questions about the fiduciary responsibilities of scientists, democratizing technology, and the links between epistemology and social power relations. A final category of ethical issues covers the way in which research on norms and values is organized, funded and integrated into other aspects of a gene drive project.

Introduction

Ethics is peculiar in that everyone is expected to have ethics, to be ethical, and these expectations are not based on any specific training or expertise. At the same time, ethics is a discipline of inquiry and scholarship dedicated to understanding normativity: what is it that makes one action or state of affairs good, responsible or beneficial, in contrast to others that are bad, irresponsible or harmful? As a form of scholarship, ethics can have a very general level of abstraction or it can be applied to very specific activities or proposals. The National Academies of Sciences, Engineering and Medicine report Gene Drives on the Horizon, released in June of 2016, devotes a significant number of pages to topics that can be characterized under the broad heading of ethics in this applied sense (see reprint of executive summary in this special issue). The report contains a chapter entitled ‘Charting Human Values’ that addresses key ethical topics in an unusually straightforward way. It discusses debates over genetic engineering, reviewing how outcomes are classified as beneficial or harmful and summarizing environmentally based values that might not generally be included in a human-focused (or anthropocentric) ethical assessment. This chapter is followed by discussions of how testing and risk analysis might be used to understand and either avoid or mitigate the adverse outcomes identified in the ‘Charting Human Values’ chapter. The report then turns to two additional topics that are often classified as forms of ethics: the role of and methods for broader public engagement in the development and implementation of gene drives, and the rationales and means for governance, both through formal regulatory processes and through mechanisms that structure incentives or foster cooperative measures to manage risks, including the potential for unjust or unfair outcomes (NASEM Citation2016).

In addition to materials drawn from ethics-oriented research on earlier emerging technology, Gene Drives on the Horizon builds upon a small but influential literature that has addressed gene drives explicitly. Lavery, Harrington, and Scott (Citation2008) provided an initial overview of ethical issues in using gene drives to promote transgenic mosquitoes for control of malaria and dengue. This has been endorsed in papers providing overview discussions on gene drives (Marshall and Taylor Citation2009). Resnik (Citation2014) published a review discussing key issues for human subjects. However, the discussion of substantive ethical issues is only now beginning to open up. Opinions on and analysis of gene drive ethics will almost certainly proliferate during the coming decade, and the venues for this discussion will include both peer-reviewed literature and public forums open to all comers. While the NASEM (Citation2016) report and this early literature identify crucial topics for anticipating the ethical issues that will accompany research, development and possible deployment of gene drives, this paper will suggest an alternative framework that both sharpens the focus of ethical deliberations in key areas and expands the scope of ethical research in others.

As mentioned above, ethics makes appearances well beyond the identification of human values (Chapter 4) in the NASEM (Citation2016) report because many of the reasons for engagement of communities, stakeholders and publics (Chapter 7) are ethical in nature. In addition, the very idea of governance (Chapter 8) depends on conceptualizing social goals and evaluating the legitimacy and moral justification for alternative governance institutions. As the report itself notes, norms for science and research even work their way into the technical section of the report (NASEM Citation2016, 67). However the report is not entirely clear about the roles for ethics beyond its discussion of human values in Chapter 4. For example, the discussion of non-anthropocentric values in Chapter 4 has profound implications for risk assessment (Chapter 6), a topic that is still discussed in relatively technical terms in the report. Yet if there are outcomes that must be avoided for non-anthropocentric reasons, they must be included among the hazards that risk assessment is intended to analyze. Although scholars and others will certainly comment upon the ethical dimensions of gene drives from many different vantage points, the discussion that follows presumes that ethics will be a structured and planned research activity that will accompany other components of gene drive science. Here, ethics is portrayed as one structured enquiry to be undertaken within organized and transdisciplinary gene drive research. Nevertheless, the framework does not presume that this activity will be conducted solely by ethics experts. If contemporary ethics of technology has learned anything, it is that ethical wisdom can come from a tremendously wide array of sources (Von Schomberg Citation2007).

This paper does not provide a summary or discussion of gene drives, the science behind them or their potential applications. The framework that is described below has been developed in response to a series of emerging technologies including nuclear power, agricultural biotechnology, adult-cell mammalian cloning, food nanotechnology and genetically engineered mosquitoes. It would be applicable (with minor modifications) to many emerging technologies. As such, a detailed discussion of gene drives is not strictly necessary for presentation of the framework itself. Using the framework in conjunction with gene drive research would clearly require familiarity with the underlying science and the uses for which it is being proposed (see the introductory article of this special issue by Delborne et al. for a descriptive summary of gene drive science).

Approach

The framework for ethics-oriented research that is developed below is structured in terms of six domains or areas for future studies. They are: (1) Standard Research Ethics; (2) Identification and Interpretation of Risk; (3) Fiduciary Responsibility; (4) Democratizing Technology; (5) Epistemology and Power; and (6) Procedural Ethics, a cross-cutting domain. These areas are characterized as separate domains primarily for purposes of exposition. They are not, in most respects, truly separable or distinct in terms of the issues that they raise or the ethical responsibilities that might be entailed. However, there are markedly distinct scholarly literatures for each of these domains, with comparatively little cross-citation. The domains may thus reflect research networks possessing significant capability for addressing one or more of the topics for gene drive ethics, while excluding findings or research tools that are prominent in other networks. For the purposes of this paper, this ‘research networks’ hypothesis is put forward to clarify the sense in which a multiple-domains approach to gene drive ethics could represent an advance over that of the NASEM (Citation2016) report and previous studies. Although suggestive, this hypothesis has not yet been subjected to an empirical analysis.

Although a few substantive ethical issues are discussed by way of example and illustration, the paper makes no attempt to survey the full scope of the literature in each of the six domains. The rationale for this is twofold. First, even a representative survey of the literature could not be accomplished within the scope of a single paper. Second, aspects of the literature are highly repetitive. Recent articles on gene editing tools discuss non-human applications of previous genetic transformation technologies (Charo and Greely Citation2015; Flotte Citation2016), yet do not cite the extensive literature discussing the ethical dimensions of agricultural, food and environmental applications of gene transfer (for an overview, see Thompson and Hannah Citation2008). While the framework that is offered below provides a theoretical locus where issues such as, for example, the nature and extent of humanity’s ethical responsibility to nature or to preserving biodiversity might be discussed, this paper does not go on to provide an entrée to or summary of those questions. Identifying and integrating a far-flung literature on the ethical analysis of gene technology will prove to be a significant task for the gene drive community, and the NASEM (Citation2016) report goes farther than this paper in specifying some of the further questions that will need ethics research.

In addition, the paper does not put forward substantive arguments about whether gene drives are or are not ethically acceptable, nor does it survey the opinions and evaluations about gene drives that might be ventured as these technologies become more widely known. It does not attempt a speculative review of arguments about the ethical acceptability of gene drives that might be proffered on philosophical grounds. The NASEM (Citation2016) report does provide an initial summary of some key topics, and a review of these contributions would be repetitive. To the extent that topics mentioned in the report are discussed below, they are used to exemplify the kinds of ethical concern that bear upon the larger question of ethical acceptability for emerging biological technologies. The analysis that follows should be read as an attempt to help gene drive research teams recognize and give advance consideration to the ethical questions that might arise in connection with their respective projects by sketching domains or categories in which ethical research should be organized. It is a tool for early ethics assessment (Buckley, Thompson, and Whyte Citation2017). In short, the framework developed in this paper describes the type of questions that might be asked in studying the ethics of gene drives; it does not pretend to answer them.

First domain: standard research ethics

Working scientists have come to expect a review of plans and protocols associated with institutional review boards and requirements for disclosing potential conflicts of interest. Research on gene drives will certainly not be exempt from these basic ethical standards, and a very brief review follows. While not philosophically controversial, procedures to ensure compliance with standard research ethics can be experienced as hurdles or obstacles by working scientists. It is thus important to underline the importance of research ethics while also stressing that many of the ethical issues discussed in other domains will not be amenable to the control measures associated with research integrity, biosafety and the use of human or animal subjects.

Basic ethical requirements for scientific integrity can be seen as a baseline for standard research ethics. Research institutions and funding agencies have introduced procedures to reinforce the importance of honesty, careful record keeping, cautions against plagiarism and disclosure of potential conflicts of interest (Bulger, Heitman, and Reiser Citation2002). Although these requirements mostly go without saying, the multi- and inter-disciplinary nature of gene drive work may raise a few additional subtleties. Disciplines differ in their approach to giving authorship credit, for example. There are also emerging issues such as rules and standards for sharing of data (Caulfield et al. Citation2008).

As noted already, ethical questions associated with the use of human subjects in connection with gene drive research have already been broached within the peer-reviewed literature. Procedures for the protection of human subjects became mandatory at institutions utilizing public funds throughout the industrialized world after scandalous abuses were publicized. Nazi experiments on prisoners and occupants of concentration camps were among the most egregious, but a longitudinal study of syphilis in a population of African-American males near Tuskegee, AL was the proximate cause for the creation of institutional review boards in the United States. These abuses were chronicled in a report to the National Institutes of Health that is colloquially known as the Belmont Report (see NCPHSBBR Citation1978). One basic ethical principle that emerged was that human beings should not be exposed to risks as a result of their participation in research without informed consent (Macklin Citation1992; Beauchamp Citation2003).

Biological researchers will have some prior familiarity with the procedures for securing permission to conduct research involving human subjects, and it is not necessary to undertake extensive discussion of these requirements. The primacy of the potential research subject’s perspective has significance for the ethics of gene drives. The NASEM (Citation2016) report suggests that the primary mechanism of exposure to human subjects is very likely to be environmental, and key questions are likely to involve containment. For example, mosquitoes modified so that they will not vector dengue or malaria necessarily expose human populations to bites from the engineered (and non-disease vectoring) mosquitoes. Hence there are unavoidable human subjects issues, even if the modification is intended to be beneficial and the chance of hazards materializing is estimated to be quite low. Although anomalies have occurred in the post-Belmont era (see Kipnis, King, and Nelson Citation2006), it is difficult to imagine a properly functioning institutional review board allowing research to proceed in the absence of some protocol for securing informed consent from human subjects who might be bitten by genetically engineered mosquitoes, and many other gene drives will encounter a similar need for protocols (Resnik Citation2014).

Second domain: identification and interpretation of risk

As the NASEM (Citation2016) report makes clear, gene drives might pose risks of several kinds depending in part on their design, in part on the domain of application. Some product and process risks are subject to regulatory oversight (a governance mechanism), but the responsibility to anticipate and understand the potential for unwanted or adverse impact from deployment of gene drives is both legal and moral. Articulated as an element of gene drive ethics, the need for assessing and managing risk is a responsibility that falls on key actors including researchers, private firms, scientific bodies and governance institutions. Although the NASEM report is clear in stating that assessing and managing risks is an ethical responsibility, it is less clear in articulating the ethical issues that are intrinsic to risk assessment.

Risk assessment has been characterized in terms of four inter-related activities: hazard identification, exposure quantification, risk management and risk communication (Stern and Fineberg Citation1996). Although this schema is simplistic, it is nonetheless useful for indicating key junctures at which ethics enter into risk assessment activities. Hazard identification consists of discovering adverse outcomes that are possible; exposure quantification is the process of determining how likely they are to occur under given circumstances. These technical phases of risk assessment comprise risk characterization, which is typically distinguished from decision-making about what should be done, that is, risk management. The separation of these phases implies both an ethical principle and an active governance policy in the regulatory context. It is intended to shield science-based inquiry into the nature and extent of risks from undue influence (Starr and Whipple Citation1980). In regulatory agencies, hazard identification and exposure quantification are conceptualized as non-political activities that produce information for risk managers who are often political appointees within regulatory agencies. These managers make decisions that necessarily reflect value judgments. In this context, risk management decisions may reflect judgments about the appropriate role of regulation vis-à-vis markets or the courts in affecting the proper response to those risks that have been identified through risk characterization (Figure 1).

Figure 1. The risk assessment schema.

This is not to say that risk characterization is devoid of value judgments. Most obviously, hazard identification presumes to identify those potential outcomes that are adverse, unwanted or ‘bad.’ Thus, a value judgment is implicit in the very idea of hazard. From a practical standpoint, this value judgment is often determined by the legal mandate for the regulatory process. The authority of regulatory agencies in the United States was traditionally limited to impacts on human health. Even the class of environmental impacts often adverts to health impacts associated with pollution, though there are cases where effect on a ‘non-target species’ (a species that the intervention was not intended to affect) is included. Given the delimitation to health impacts, biological and biomedical expertise becomes crucial for ascertaining the mechanisms that can lead to injury or disease. From a more philosophical (which is to say ethical) perspective, the question of which outcomes should be classified as hazards should be regarded as worthy of deliberation and debate (Cranor Citation1997; Thompson Citation2003).

The value judgments in exposure quantification are often more subtle, but pervasive, nonetheless. Ascertaining the likelihood that a hazard will materialize requires some specification of the circumstances in which a product or process will be used. In one of the early ethical critiques of a technical risk assessment, Brunk, Haworth, and Lee (Citation1991) determined that exposure to Alachlor, an agricultural pesticide, had been quantified under the assumption that users would follow label instructions (an assumption that further implied that users could read instructions written in English). Given the actual circumstances in which Alachlor was being used during the time this risk assessment had been conducted, these assumptions were unrealistic, leading to a significant underestimate of risk. More generally, exposure assessments require value judgments that are inherent within statistical analysis, such as characterization of a reference population and the identification of stopping rules (Douglas Citation2000; Thompson Citation2003; Elliott Citation2009).

To summarize, risk characterization does involve value judgments. Hence, an early tendency to represent the relation between risk characterization and risk management as a one-way flow, out of the technical processes of hazard identification and exposure quantification and into the admittedly normative activities of risk management, is misleading (see Rowe Citation1977). A similarly misleading practice of early work on risk assessment was either to omit communication entirely, or to represent it as an activity within risk management. On this construal, communicating to the public was viewed as one of several activities that might be employed to make the best use of information gleaned through risk characterization. It is now recognized that non-expert members of the public may be in possession of information that is crucial to each of the other three phases in a risk assessment process (Wynne Citation1989; Stern and Fineberg Citation1996). In each of these cases, insistence upon the putatively ‘value-free’ nature of science may have led to misunderstandings that resulted in faulty risk assessment. Such failures are relevant within other domains of the ethics for gene drives, as will be discussed below.

In fact, frequently mentioned ethical questions can be comfortably accommodated within this framework. The NASEM (Citation2016) report notes philosophical debates over the kind of protection that is owed to plants and non-human animals, as well as to ecosystems or collective entities such as species. These are, in effect, debates over what counts as a hazard from the perspective of risk assessment. The framework of risk can thus accommodate much of the debate within environmental philosophy over the intrinsic vs. instrumental value of biodiversity (Norton Citation2015). The report also notes that the potential for ‘dual use,’ that is, for intentionally harmful and nefarious applications of a technology, is not considered within the standard institutional review procedures of research ethics; this potential, too, can be interpreted and analyzed as a hazard. A number of emerging technologies are now being debated in connection with their potential for intentional abuse as well as their potentially destabilizing impact on international security and the policing abilities of national states (Miller and Selgelid Citation2007; NASEM Citation2016, 69–70).

The ethical issues associated with gene drive risks become more complex when social amplification is taken into account. ‘Social amplification of risk’ is a term of art used in the social sciences to cover a variety of social and psychological factors that influence people’s perception or judgment about the level, seriousness and degree of threat associated with a given risk or risky situation. Early on these factors were associated with risk perception, but the shift toward terminology of risk amplification was made in response to a tendency for technical experts to describe their own scientifically informed estimates of risk as ‘reality,’ in contrast to the (mere) perception by lay subjects and the general public (Kasperson et al. Citation1988; Kasperson and Kasperson Citation1991). In this literature, factors such as age, level of education, race and gender have been correlated with subjects’ rankings or reports on the amount of risk associated with numerous life events ranging from exposure to radiation from nuclear power to lifetime cancer risk to getting hit by a meteorite (Slovic Citation1987). Research in the social sciences revealed that technically trained experts were not immune to influence by these social and psychological factors (Slovic Citation1999).

In addition, research in sociology, anthropology and communication studies has identified features of risk (such as whether it is voluntarily accepted, whether it is tied to catastrophic events or whether it is a ‘dread’ or especially feared risk) that tend to influence subjects’ judgment about the seriousness of a risk. Much research has emphasized how social factors can lead subjects to reach judgments that differ from those of scientists or engineers having expertise in the relevant subject matter (radiological hazards, toxicology, molecular biology, etc.). The literature recognizes that social factors can either amplify or increase a subject’s view of how risky a given technology or circumstance is when compared to experts, or attenuate risk perception, leading subjects to underestimate the degree or seriousness of a given risk (Kasperson and Kasperson Citation1991; Kasperson et al. Citation2003).

Many studies in the social amplification of risk have emphasized the risks of emerging technology, but there are routine cases in healthcare and public health where social factors such as race, gender and age are tied to systemic over or underestimates of risk, leading to important ethical issues in public education and health promotion campaigns. There are also ways in which the standard social science definition of this field could usefully be expanded. For example, important case studies demonstrate how social factors equipped non-experts with information highly pertinent to the estimation of exposure to biological and radiological hazards (Wynne Citation1989). In addition, other studies show that non-experts tend to incorporate a much broader array of hazards into their risk attitudes, including, for example, exposures to dislocation or exploitation through economic activity or political decision-making. Socio-cultural factors contribute to the way that individuals or social groups conceptualize adversity. Cultures that conceptualize non-humans (animals, plants, sacred places) as having characteristics of personhood may be open to including a much broader classification of potential outcomes under the heading of a hazard, for example (Cranor Citation1997). Finally, social amplification or attenuation clearly occurs when subjects experience their relationship with others through the categories of trust and solidarity, on the one hand, or through vulnerability and exploitation, on the other. Thus differential perceptions of risks from participation in research among black Americans may well be justified given the Tuskegee experience and similar episodes (Jones Citation1993).

The NASEM (Citation2016) report arguably lacks a full appreciation of risk amplification and its implications for ethics. Given that social factors clearly influence people’s estimate of the nature and extent of risk, we should expect that publics who encounter gene drives either in the abstract (through reading, mass media or campaigns organized by activist groups) or through concrete encounters with research or public health teams developing gene drives will develop estimates of the risk of gene drives that diverge from those of the expert community. The first-order ethical question for gene drive technology is to arrive at a judgment as to the legitimacy of the social factors that are introduced by the social amplification of risk approach. In short, we must ask whether these non-experts know something that the experts don’t. Taking this question seriously is a component of ethics because dismissing non-expert knowledge is a form of hubris. Avoiding hubris is especially crucial when subjects come from groups that have a history of marginalization or high cultural divergence from the expert group (Wynne Citation2001). Marginalized groups may have survived through strategies of withholding information from dominant social groups and may thus be understandably reluctant to share their knowledge base fully with outgroups (Agrawal Citation1995). Cultural difference may pose a barrier to full communication about hazards and mechanisms of exposure (Peterson Citation1997). At the end of the day, experts must revisit basic research ethics (first domain) in light of the possibility that others do in fact have new information that is pertinent to the expert assessment of hazard and exposure.

Beyond this first order problem of determining whether social amplification is revealing new data that must be integrated into risk characterization, ethics must be involved in making a judgment as to whether a third party’s perception of risk is reasonable or unreasonable. This judgment will be affected by differing conceptions of rationality, hence it may be both philosophically controversial and culturally relative. Nevertheless, it is important because further activities in gene drive research and development are contingent upon whether subjects exposed to risk, outsiders and the general public are being influenced by social factors that are highly reasonable, given experience and knowledge base, or utterly fantastic. Alternatively, expressions of fear or concern may be motivated by strategic considerations that can themselves be judged ethically legitimate or pernicious (Shrader-Frechette Citation1991; Scheman Citation2011). Given points already noted several times above, for example, it is highly reasonable for those who have been exploited on the basis of race or gender to regard techno-scientific projects with initial suspicion, and this suspicion is experienced by them as a heightened sense of being at risk. This feeling of being at risk because of past experience with scientists is readily translated into their estimate of the risk associated with the technology or scientific project, in this case, gene drives. An extended period of trust-building (coupled with the relinquishment of power dynamics that the scientific team may not even be aware that they possess) will be ethically required.

In contrast, experience with other emerging technologies including agricultural biotechnology, nuclear power, chemical technology and nanotechnology shows that some social groups who oppose the technology will utilize social and psychological amplification of risk to heighten outrage and increase the public’s sense of being at risk from the technology. Such groups may have objectives largely unrelated to physical risks. They are, in fact, often motivated by unfair power dynamics and socio-economic risks (Munro and Schurman Citation2008). In some extreme cases, opponents of technology may be motivated by nothing more than their self-interest in building an organization and attracting funding. For example, anti-hunger civil society groups became so pervasive in the first decade of the twenty-first century that they engendered a ‘credibility crisis’ for NGOs without regard to their domain of activism (McGann and Johnstone Citation2005). Making the ethical judgment that such motivations are ‘unreasonable’ is neither simple nor straightforward, for groups that are quite willing to use ‘scare tactics’ may be pursuing social objectives that fall well within the range of ethically reputable social goals. At the same time, deployment of social amplification may also reflect longstanding ethical grievances that have their source in governance and democracy issues, discussed below (see Whyte Citation2017 for a related discussion). Nevertheless, what is ethically required in interacting with this range of socially amplified risk perceptions should be understood as quite different from what is needed to deal with others whose identity and social role lead them to judge risks in ways that are at odds with those of scientific experts.

Third domain: fiduciary responsibility

The ethical questions raised in the previous domain concern the characterization and quantification of risk, as well as extra-scientific factors that complicate risk assessment. They do not address ethical responsibilities to do something about risk – if any – that would fall on gene drive researchers. Among other things, the term ‘fiduciary responsibility’ indicates a situation in which the bearer of said responsibility is legally or morally obligated to act in the best interests of another party, often a client but potentially the public at large (Kaplan Citation1976). The term has come into recent use in connection with a proposal that financial advisors should be legally obligated to tell clients whether products being offered are consistent with the client’s best interest (Guerriero Citation2017). Although engineers and physicians operate under explicit criteria of fiduciary responsibility for clients and patients, respectively, there has been relatively little direct discussion of the fiduciary responsibilities of working scientists, and whether the conditions of their employment or funding are relevant to their having any obligation to serve the overall public good.

Sankar and Cho (Citation2015) characterize scientists’ duty to serve the public good as ‘social responsibility.’ They note that successive editions of the National Academy of Science publication On Being a Scientist (Citation1995) place increasing stress on duties to place the interests of affected parties above other concerns. Although neither On Being a Scientist (NAS Citation1995) nor Gene Drives on the Horizon (NASEM Citation2016) utilizes the term fiduciary, there are many junctures at which the text seems to impute such responsibilities to gene drive research teams. In the chapter on human values, the report goes so far as to assert that norms of working for public rather than personal benefit are viewed as intrinsic to the idea of science, and widely accepted by working scientists (NASEM Citation2016, 67). A commentary on Sankar and Cho’s discussion of social responsibility links their concept to literature in business ethics and corporate responsibility that frames these public duties in terms of fiduciary requirements (Conley et al. Citation2015). Although the terminology of a fiduciary is not standard, it will be utilized here to emphasize the relational dimension of this responsibility.

The commitment to a fiduciary relationship to stakeholders and the public is implicit in the NASEM (Citation2016) report’s sections on risk assessment, where hazards (physical, financial or reputational) to individuals and firms developing gene drives are not even mentioned. However, it is Chapter 7 of Gene Drives on the Horizon that includes the most thorough discussion of fiduciary responsibilities, which are described as reasons why scientific teams working on gene drives have obligations to conduct engagement activities with stakeholders, communities and publics. Engagement is described as an activity that can bring key information into a research project and as essential to a gene drive project’s social acceptability, but it is also characterized as a matter of respect. The chapter concludes by stating that ‘The question is not whether to engage communities, stakeholders and publics in decisions about gene drive technologies, but how best to do so’ (NASEM Citation2016, 141).

However, it is not obvious that scientists or technology developers working in other areas accept such a broad conception of their responsibility to those who will be affected by their work. In fact, one model for R&D activity presumes that while technology developers must certainly comply with the law, it is the regulatory agencies that have the responsibility to look out for the interests of affected parties. Novel technologies developed as commercial products must meet standards of merchantability, and must not be misrepresented. These legal requirements do represent minimal fiduciary duties for firms operating in the private sector, but here a scientist who obeys the law has no fiduciary relationship with people outside her lab. What is more, firms are disciplined by the market itself, which will (in most cases) not support harmful or defective products (though, of course, tobacco products represent one of many exceptions to this rule). Market discipline suggests that firms will be more successful if they bear their customers’ interests in mind, but this is simply the self-interest of an actor offering a product. It does not connote an ethical duty to develop or market products that are in the interests of the people who buy them. Caveat emptor is the rule, and regulatory agencies are designed to ensure that at least some of the most crucial health and safety interests of communities will be taken into account before a novel technology is placed on the market (see Percival et al. Citation2013).

Since private firms are already operating in gene drive research, the suggestion that gene drive scientists should regard themselves as fiduciaries for certain third parties (much less broader publics) should probably be regarded as a topic for future research, rather than something that can be assumed in the manner suggested by the NASEM (Citation2016) report. One argument favoring a fiduciary responsibility is that the scientific community must assume a responsibility that issues from their particular forms of expertise. Here, the fiduciary relationship is based on the fact that scientists are in a position to know things about the potential risks of gene drives that very few members of the public could be expected to know. There are areas of science, such as toxicology, where individual practicing scientists do contribute to public debates and undertake pro bono work largely because their expertise places them in a position to do so. The work of Marc Edwards and Amy Pruden in connection with pollutants in city water supplies is an example of research that is funded and published through mainstream disciplinary institutions, yet is understood by the scientists doing this work as fulfilling a fiduciary relationship. Edwards’ group at Virginia Tech maintains a public website with frequent updates as a means to fulfilling the team’s fiduciary role (Virginia Tech Research Team Citation2017).

Given this way of construing fiduciary responsibility, the ethical issues that arise in attempts to measure, rank or manage risks from gene drive research (as well as from any eventual deployment of a gene drive technology) differ from the fiduciary responsibilities of various actors in gene drive research. Expertise – the possession of a relevant knowledge base – is the basis for assignment of this duty to specific individuals and groups. As an ethical concept, fiduciary responsibility means that those who have the ability to assess and manage risks from gene drives have a duty to use this knowledge to protect the interests of those who might be affected by gene drives, both during the research phase and after their deployment in practical use (Thompson Citation2008). However, it is doubtful that this is a responsibility that can be distributed to every individual working in gene drive research. The relationship might be more usefully described as one holding between the community of gene drive researchers and the various individuals and publics to whom fiduciary relationships are owed. As long as a few individuals or groups accept and discharge these responsibilities, there would be no obligation for every individual bench scientist to do so. However, if no one acts on behalf of stakeholders and the public, there has been a general ethical failure on the part of the community as a whole (Thompson Citation2012).

In addition, some of the issues in the NASEM (Citation2016) report that are mentioned under the heading of social justice might be characterized under the heading of a fiduciary responsibility that is shared by the science community as a whole (others fall into ‘democratizing technology,’ below). Access to a successful technology and recompense for damages are not equally distributed among all persons, and adverse outcomes are too frequently skewed along lines of wealth, ethnicity and gender. The NASEM report suggests that offering those who express justice concerns a voice in decision-making may be the most effective way to operationalize these responsibilities (NASEM Citation2016, 76). However, like fiduciary responsibilities to monitor or communicate about gene drive risks, the responsibility to provide opportunities for voice is not something that can be meaningfully assigned to individual scientists. Here, too, conceptualizing a fiduciary relationship between the gene drive science community and affected publics may be a more useful approach. In the framework being offered here, there is additional stress being laid on the distinction between scholarly or activist debate over the implications of technical change and the nature of a scientific group’s responsibility to either inform or participate in that debate.

Fourth domain: democratizing technology

Although regulatory agencies screen technological innovations for risks to human health and the environment, members of the public may not be satisfied. They may suspect that regulatory oversight is inadequate, or they may have concerns about impacts that are simply not subjected to regulatory purview. Regulatory politics is thus an extension of ethics in that differing conceptions of appropriate state action will be taken by contending groups hoping to influence the democratic process (Gutmann and Thompson Citation1997). The phrase ‘democratizing technology’ highlights the role of the lay (or non-expert) public in oversight, governance and routing the course of technical change. In one sense, questions about the role of the public in facilitating or retarding technical change are at least as old as the industrial revolution. ‘Ned Ludd’ may have been a pseudonym, but the Luddites were craft weavers who objected to the introduction of powered looms in the British textile industry. They wanted to stop this innovation because of its social consequences: they stood to lose their jobs (Boal Citation1995). There has been controversy over whether and under what conditions a society might relinquish technological opportunities ever since.

The NASEM (Citation2016) report alludes to the well-organized political opposition to gene drives both in the governance chapter and in its chapter on engaging interested publics. A full discussion of the issues lies well beyond the scope of this summary treatment, but at least three features bear on the conceptualization of ethical issues for gene drives. First, the recent history of gene technology in the food system has created both a legacy of suspicion and established organizations and pressure groups that will be attentive to new technologies utilizing genetic engineering. Second, the study of public attitudes toward and acceptance of emerging technology has become established within the social sciences. There are thus tools for understanding the broader impact of technology that did not exist even two decades ago, and a constituency for knowledge about the public’s point of view. Finally, there are straightforward ethical reasons for believing that many technologies should be subjected to much broader forms of public oversight and approval than they have been in the past. Each of these features is mentioned in the NASEM report. They are discussed briefly in what follows.

As is widely known, genetically engineered agricultural crops have encountered significant resistance in many parts of the world, retarding most applications in Europe (Bauer and Gaskell Citation2002) and placing barriers to their use in food crops in India (Herring Citation2015) and China (Liu, Pieniak, and Verbeke Citation2014). The bases for opposition to gene transfer in agriculture are broad and diverse. They include both risk-based and moral concerns as well as the way that this technology has affected the balance of economic power among agricultural-input firms and between input suppliers and farmers (Thompson and Hannah Citation2008). Public resistance to genetic engineering in agriculture does not imply unilateral opposition to applications of gene technology. Applications in biomedical fields have not been targets of organized resistance, and conservation-oriented uses of genetic engineering have shown indications of public support (Nielsen, Jelsø, and Öhman Citation2002). Nevertheless, the controversy over agricultural biotechnology testifies to the potential for disagreements about the appropriate governance of gene technologies, and the history of resistance has also created an infrastructure of organizations and individuals who have the will, the knowledge and the means to oppose gene technologies in the public arena (Munro and Schurman Citation2008; Hicks Citation2017).

The controversy over agricultural biotechnology also stimulated growth among social scientists who have developed methodologies to gauge public acceptance, as well as to engage members of the public in deliberations over the management of and political decision-making about technical change. This literature is very large, and has expanded especially in the wake of social science research opportunities that became available in conjunction with initiatives in nanotechnology and synthetic biology. Stated perhaps too succinctly, research conducted under the aegis of constructive technology assessment (Rip and te Kulve Citation2008), responsible innovation (Stilgoe, Owen, and Macnaghten Citation2013), or anticipatory governance (Guston Citation2014) blends empirical study of public attitudes toward emerging technology within a framework that is intended to aid researchers, sponsors of research (including public, non-profit and for-profit organizations) and the various offices of government in the joint project of bringing forth technical changes that enjoy broad public support and that meet compelling needs.

Ethics is implicit within all of these frameworks, both as a form of inquiry or public discourse that is involved in the identification and justification of those needs classified as ‘compelling’ (Sclove Citation1995), and as the basis for judging the entire project of technology development as a legitimate expression of the public will (Habermas Citation1975). Thus, the relatively recent emergence of methods to incorporate both social science and ethical reflection within the innovation process for nanotechnology and synthetic biology can be viewed as a response to criticisms that have been made sporadically since the nineteenth century. If science and technology are indeed to be regarded as engines of social progress, there is a need to ensure that their applications and outcomes are consistent with those philosophical, ethical values that constitute progress. For at least the last three centuries, these values have been seen to rest on notions of consent – not, to be sure, the same form of consent that has been institutionalized in human subjects review, but the notion that the power to implement widespread social changes can only be regarded as justified when it is exercised in a manner that is consistent with the broad agreement of those who are affected by those changes – consent of the governed. This is the third and most general sense in which ethics calls for an effort to democratize technology. It is a broad ranging political effort to ensure that enormous capabilities of science and technology are exercised in a manner that is consistent with what people actually want (Kitcher Citation2001).

This is not the place for a full-fledged discussion of contending political philosophies, but two broadly stated philosophical debates influence the way that democratizing technology will affect gene drives. In the first debate, one influential perspective on the link between pursuit of interests and the democratic process holds that individuals and social groups are always fully justified in being as aggressive as the law allows in pursuit of government policies that are favorable to their interests. This includes exploiting the naiveté or ignorance of others, making statements that are disingenuous or misleading and refusing to disclose matters that might not be favorable to their cause. The other perspective takes a more idealistic view of public debate, advocating a more fully informed and less partisan conception of public dialog (Ball, Dagger, and O’Neill Citation2014). Scientific bodies have traditionally aligned with this latter perspective, though the same cannot be said for corporations and other private groups with a specific interest in seeing a given technology move forward. Developers of gene drives will need to decide whether they are scientists in this sense, or partisan advocates (see Pielke Citation2007).

The second debate is the more familiar political contest between government action and laissez-faire. It is not always obvious which approach is more favorable for technological innovation. A review of the various rationales for governmental promotion or discouragement of a given technological change, on the one hand, and for allowing the existing distribution of wealth, property rights and market preferences to guide outcomes, on the other hand, would quickly become tedious. Yet this debate does have implications for the governance of gene drives. Although the philosophical perspectives that contend with one another in the domain of political theory are multi-dimensional, the distribution of benefits and burdens across social classes or race, gender and ethnic identities is a central theme (Sandel Citation2010). Suffice it to say that engagement in these political issues really is an inevitable aspect of the social ethics for developing gene drive technology, as it is for virtually all technology of any kind. There may be sound reasons for desisting from active participation in the most public forms of political debate, yet developers of gene drive technology should not delude themselves into thinking that such matters have no relevance for their technology. Gene Drives on the Horizon (Citation2016) arguably underplays the potential for this kind of politicization of science. Debate over who wins and who loses from any particular application of a gene drive will be a large topic – so large that many will associate the entirety of ethics with distributive justice. One aim of this framework is to ensure that a more comprehensive perspective is taken.

Fifth domain: epistemology and power

The links between ethics and epistemology are not well reflected in the NASEM (Citation2016) report. Recent work in the theory of knowledge and the philosophy of science has emphasized the fact that science is a socially organized activity. As such, scientific research is likely to reflect values (for good or ill) that are implicit within the social processes that support it, be they the organization of universities and laboratories or the political processes that give rise to research opportunities and its financial support (Longino Citation1994; Anderson Citation1995). Furthermore, the ‘knowability’ of some facts or data can depend upon occupying a particular social position or embodied experience. Excluding groups or individuals who represent the standpoint or situation from which such things can be known has the effect of systematically reproducing ignorance. Thus, for example, Tuana (Citation2004, Citation2006) has traced large gaps in both scientific and social knowledge about women’s health and even the connections between their mental outlook and female physiology. Her argument, however, is that these should not be regarded simply as gaps, but as systematically produced areas of ignorance, as male dominance in science and social norms combined to preclude inquiry and knowledge transmission about a subject that could have readily been made available for research. Exclusion of racial groups can produce similar lacunae in both knowledge and in research procedure (Dotson Citation2014). This group of theorists stressing standpoint has critiqued the way that scientific knowledge production has reflected the embodied experience of white males. They note that even unconscious neglect of excluded standpoints has often reinforced the social institutions that reproduce privileged access to research opportunities, not to mention positions of social and economic power (Harding Citation1998).

This work in feminist epistemology and the philosophy of science provides a basis for deeper and more careful consideration of research methods than is typically entailed by the informed consent procedures required for human subjects approval. This kind of feminist or post-colonial critique is linked not only to the abuses that gave rise to the Belmont report, but also to features that have been discussed above in connection with democratizing technology and the social amplification of risk. The Tuskegee fiasco is generally theorized as an abuse of informed consent (Beauchamp Citation2003), yet it seems unlikely that this error would have been made if black scientists had been adequately represented in the teams conducting the studies. Similarly, if women tend to apply different standards when judging the relative seriousness of certain types of risk, there are at least prima facie reasons to consider the possibility that distinctive features in the gendered experience of women or their social exclusion may be contributing to these differences. There is thus a sense in which many of the issues already discussed can be brought under the heading of these novel ways to conceptualize the knowledge production process. What feminist epistemology adds is a set of reasons to probe the questions more deeply and to ask whether there may be ways in which the concepts of objectivity that have long governed research in the bio-physical sciences may actually be implicated in social practices that are ethically indefensible.

In this connection, two feminist theorists are especially worth noting. Studies by Longino have examined how the ideal of value-free science has perpetuated a set of unexamined value-commitments within bio-physical research methodologies. Longino (Citation1990) recommends more straightforward and direct articulation of values guiding research, including both social goals and epistemic values such as the reproducibility of results. For projects such as gene drives, where the rationale for research is strongly shaped by technologically supported public health initiatives, the relevance of direct articulation and discussion of driving values would appear obvious. More recent work by Scheman (Citation2011) has examined how traditional notions of scientific objectivity relied implicitly on the judgment of the scientific community. She reveals that even here, the underlying value commitment was to nurturing and preserving trust relationships among members of that community. Scheman suggests that scientific projects having broader impacts must reconceptualize the relevant community and come to view the cultivation of trust relationships as extending to the larger community of people for whom knowledge generation is undertaken. She argues that this must be seen as both an ethical and a methodological necessity.

Sixth domain: procedural ethics

The ethical issues discussed at each of the previous levels might be characterized as substantive in the sense that they take up specific ethical concerns: they are about some specific ethical subject matter. In contrast, this last category is purely procedural in the sense that it concerns the way that gene drive research teams need to constitute and organize themselves in order to deal with all the previous levels. Procedural questions were, in fact, the main concern of the World Health Organization framework document and a more detailed account of how teams might approach them was provided there (WHO Citation2014). A full discussion of procedural questions would in fact begin to broach questions such as the need or wisdom of formalized review procedures and the possibility that oversight panels might be needed. This section addresses one facet, offering a very brief discussion of team composition and interaction.

The discussion of the first five domains suggests that expertise beyond that of most biologically trained scientists will be relevant to a full consideration of the ethical questions being raised. Some of this expertise may reside among scholars who study human cultures or decision-making processes, while other types of expertise involve convening community groups and organizing activities with significant participation by citizens, government officials, civic leaders and possibly civil society organizations representing specific topics of social concern. A note of caution must be interjected at this point. While such skills may be needed to effectively discharge key ethical responsibilities listed above, teams should not presume that these various levels of ethical question can simply be delegated to specialists, so that biologically trained scientists can be free of them. Indeed, the framework developed in this paper leaves the difficult work of integrating these diverse bodies of scholarship to gene drive research teams themselves.

Only a few suggestive comments on integration are possible. First, some attention to standard research ethics is now universally required by groups receiving public funds, whether through national governments or international organizations such as WHO. Standard research ethics is too often structured simply as a form of regulatory compliance (Rollin Citation2006). Nevertheless, a research group that engages in thoughtful consideration of human or animal subjects affected by their activities will find their discussions rapidly moving on to other areas discussed in the other four domains. Although risk assessment as such is not a formal requirement, documents such as the WHO framework for genetically engineered mosquitoes (WHO Citation2014) and the NASEM (Citation2016) report demonstrate that the need for risk assessment is widely accepted. The discussion provided here moves beyond the NASEM report; risk assessments will be improved both in terms of their technical accuracy and in terms of their ability to serve as a basis for communicating and engaging publics when ethics is embedded at key junctures. Tuana (Citation2013) has made similar comments with respect to the role of ethics in climate science.

Very few biological scientists have reflected on the significance that the social construction of risk has for developing a comprehensive risk assessment, and these themes are not included in the NASEM (Citation2016) report. When one investigates how risks arise in part because of the ways people are socially related to one another, and because one person or group knows things that another does not, the activity of risk assessment itself raises research questions that lead into the other domains of ethics described in this paper. If fiduciary responsibility is taken for granted (as it appears to be in the NASEM report), these questions are collapsed into assessing and communicating about risks. In the framework developed here, fiduciary responsibility is broken out as a separate domain partly to call attention to some underappreciated ethical questions, but also because risk assessment itself will take a different shape if it is construed as meeting regulatory review requirements rather than as discharging a more open-ended set of responsibilities to interested parties. Asking questions about the relationship between science and the various publics that will use, be affected by, take an interest in, benefit from or be put at risk by gene drives leads naturally into questions of governance and public engagement. Many of the ethical questions embedded in these activities (currently dominated by the social sciences) are addressed in the literature discussed above under ‘democratizing technology.’ There is, in short, a flow that connects these questions.

One theme stressed throughout this framework for ethics is that ignorance and uncertainty reflect social relationships. Although people trained in scientific disciplines may not think of themselves as power brokers, their knowledge positions them, relative to non-specialists, as the people who will be consulted and listened to in key decision-making situations. Scientists are, in this sense at least, holders of power. As such, non-specialists may rightly take themselves to be vulnerable to bad judgments or ethical lapses on the part of scientists. They take themselves to be at risk, and this will be translated and expressed as a risk deriving from gene drives themselves. Although scientists who work in risk assessment are increasingly appreciative of the ways in which their own expertise is plagued by gaps, the sense in which research methods themselves can create ignorance is rarely reflected upon. In short, explicit attention to perspectives being raised in feminist epistemology and critical race studies could support some of the integrative bridge-building needed to knit the other domains together.

To sum up, two related sets of questions call for reflection on the procedures followed in organizing gene drive research teams. The first concerns either incorporating relevant ethical expertise within a team or developing plans for consultation with individuals and groups (including members of the lay public who will not consider themselves to have ethical expertise). Should ethicists be embedded throughout the project, or are there key junctures at which specific types of additional ethics-oriented input can be supplied, either through public engagement or through research activities organized as adjuncts to the main biological protocols? The second procedural need is to develop plans that engage these ethical questions at the level of detail required and that also keep other members of the team (especially biologically trained scientists in leadership positions) apprised of and involved in the consideration of ethical issues. Ethics cannot be treated as a box ticked once some additional publications have appeared in journals that no one in the biological sciences will ever read (Rollin Citation2006).

Conclusion

Each of the six domains for ethics represents a distinct set of issues and questions that are very likely to evolve as gene drive projects move from initial proof-of-concept through the phases of field testing, encounters with the regulatory process and engagement with broader publics. Research teams must certainly expect that ‘broader publics’ will include some groups that adopt tactics to resist the eventual deployment of gene drives. This expectation triggers ethical issues that have been characterized as ‘democratizing technology,’ or meeting the norms of democratic governance, and ‘social amplification,’ or dealing with the complexities introduced by the social and psychological reception of gene drive risks. Feminist epistemology suggests a framework for considering how these issues could and should be addressed within the norms that define a given research process as rigorous, objective science. Finally, there are both standard, well-known issues and a set of organizational issues that should be addressed even while scientific teams are being assembled to pursue gene drive research. Like most ethical questions, the issues raised at each level are unlikely to yield definitive solutions. However, my intent has not been to burden gene drive science with impossible tasks, but rather to liberate it from the paralysis that arises when such tasks are deferred.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Paul B. Thompson holds the W. K. Kellogg Chair in Agricultural, Food and Community Ethics at Michigan State University, where he is also Professor of Philosophy, Community Sustainability and Agricultural, Food and Resource Economics.

Additional information

Funding

This work was supported by the National Institute of Standards and Technology [grant number 70NANB15H339] and the U.S. Department of Agriculture [grant number MICL02324].

References

  • Agrawal, A. 1995. “Dismantling the Divide Between Indigenous and Scientific Knowledge.” Development and Change 26: 413–439.
  • Anderson, E. 1995. “Knowledge, Human Interests, and Objectivity in Feminist Epistemology.” Philosophical Topics 23 (2): 27–58.
  • Ball, T., R. Dagger, and D. I. O’Neill. 2014. Political Ideologies and the Democratic Ideal. 9th ed. Boston: Pearson Education.
  • Bauer, M., and G. Gaskell, eds. 2002. Biotechnology – The Making of a Global Controversy. New York: Oxford University Press.
  • Beauchamp, T. 2003. “The Origins, Goals, and Core Commitments of the Belmont Report and Principles of Biomedical Ethics.” In The Story of Bioethics: From Seminal Works to Contemporary Explorations, edited by J. K. Walter, and E. P. Klein, 17–46. Washington, DC: Georgetown University Press.
  • Boal, I. A. 1995. “A Flow of Monsters: Luddism and Virtual Technologies.” In Resisting the Virtual Life: The Culture and Politics of Information, edited by J. Brook, and I. Boal, 3–16. San Francisco: City Lights.
  • Brunk, C., L. Haworth, and B. Lee. 1991. Value Assumptions in Risk Assessment: A Case Study of the Alachlor Controversy. Waterloo, ON: Wilfrid Laurier Press.
  • Buckley, J., P. B. Thompson, and K. P. Whyte. 2017. “Collingridge’s Dilemma and the Early Ethical Assessment of Emerging Technology: The Case of Nanotechnology Enabled Biosensors.” Technology in Society 48: 54–63.
  • Bulger, R. E., E. Heitman, and S. J. Reiser, eds. 2002. The Ethical Dimensions of the Biological and Health Sciences. 2nd ed. New York: Cambridge University Press.
  • Caulfield, T., A. L. McGuire, M. Cho, J. A. Buchanan, M. M. Burgess, U. Danilczyk, Christina M. Diaz, et al. 2008. “Research Ethics Recommendations for Whole-Genome Research: Consensus Statement.” PLoS Biology 6 (3): e73. doi:10.1371/journal.pbio.0060073.
  • Charo, R. A., and H. T. Greely. 2015. “CRISPR Critters and CRISPR Cracks.” The American Journal of Bioethics 15: 11–17.
  • Conley, J. M., G. Lázaro-Muñoz, A. E. R. Prince, A. M. Davis, and R. J. Cadigan. 2015. “Scientific Social Responsibility: Lessons from the Corporate Social Responsibility Movement.” The American Journal of Bioethics 15: 64–66.
  • Cranor, C. F. 1997. “The Normative Nature of Risk Assessment: Features and Possibilities.” Risk: Health, Safety & Environment 8 (2): 123–136.
  • Dotson, K. 2014. “Conceptualizing Epistemic Oppression.” Social Epistemology 28: 115–138.
  • Douglas, H. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67: 559–579.
  • Elliott, K. C. 2009. “The Ethical Significance of Language in the Environmental Sciences: Case Studies from Pollution Research.” Ethics, Place & Environment 12: 157–173.
  • Flotte, Terence R. 2016. “Gene Drives: Biological Shield or Ecological Menace?” Human Gene Therapy 27 (8): 561–562.
  • Guerriero, E. 2017. “Fiduciary Responsibility, 1974 to 2017.” Journal of Financial Service Professionals 71 (2): 89–96.
  • Guston, D. H. 2014. “Understanding ‘Anticipatory Governance’.” Social Studies of Science 44: 218–242.
  • Gutmann, A., and D. Thompson. 1997. “Deliberating about Bioethics.” Hastings Center Report 27 (3): 38–41.
  • Habermas, J. 1975. Legitimation Crisis. T. McCarthy, Tr. Boston: Beacon Press.
  • Harding, S. 1998. Is Science Multi-Cultural? Postcolonialisms, Feminisms, and Epistemologies. Bloomington: Indiana University Press.
  • Herring, R. J. 2015. “State Science, Risk and Agricultural Biotechnology: Bt Cotton to Bt Brinjal in India.” Journal of Peasant Studies 42: 159–186.
  • Hicks, D. J. 2017. “Genetically Engineered Crops, Inclusion and Democracy.” Perspectives on Science 25 (4): 488–520.
  • Jones, J. 1993. Bad Blood: The Tuskegee Syphilis Experiment. New and Expanded ed. New York: The Free Press.
  • Kaplan, S. A. 1976. “Fiduciary Responsibility in the Management of the Corporation.” The Business Lawyer 31: 883–927.
  • Kasperson, R. E., and J. X. Kasperson. 1991. “Hidden Hazards.” In Acceptable Evidence: Science and Values in Risk Management, edited by D. G. Mayo and R. D. Hollander, 13–46. New York: Oxford University Press.
  • Kasperson, J. X., R. E. Kasperson, N. Pidgeon, and P. Slovic. 2003. “The Social Amplification of Risk: Assessing Fifteen Years of Research and Theory.” In The Social Amplification of Risk, edited by N. Pidgeon, R. E. Kasperson, and P. Slovic, 13–46. New York: Cambridge University Press.
  • Kasperson, R. E., O. Renn, P. Slovic, H. S. Brown, J. Emel, R. Goble, J. X. Kasperson, and Samuel Ratick. 1988. “The Social Amplification of Risk: A Conceptual Framework.” Risk Analysis 8: 177–187.
  • Kipnis, K., N. M. P. King, and R. M. Nelson. 2006. “An Open Letter to Institutional Review Boards Considering Northfield Laboratories’ PolyHeme® Trial.” The American Journal of Bioethics 6: 18–21.
  • Kitcher, P. 2001. Science, Truth and Democracy. New York: Oxford University Press.
  • Lavery, J. V., L. C. Harrington, and T. W. Scott. 2008. “Ethical, Social and Cultural Considerations for Site Selection for Research with Genetically Modified Mosquitoes.” The American Journal of Tropical Medicine and Hygiene 79: 312–318.
  • Liu, R., Z. Pieniak, and W. Verbeke. 2014. “Food-related Hazards in China: Consumers’ Perceptions of Risk and Trust in Information Sources.” Food Control 46: 291–298.
  • Longino, H. E. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.
  • Longino, H. E. 1994. “In Search of Feminist Epistemology.” The Monist 77: 472–485.
  • Macklin, R. 1992. “Universality of the Nuremberg Code.” In The Nazi Doctors and the Nuremberg Code, edited by G. J. Annas, and M. A. Grodin, 240–257. New York: Oxford University Press.
  • Marshall, J. M., and C. E. Taylor. 2009. “Malaria Control with Transgenic Mosquitoes.” PLoS Medicine 6 (2): e1000020. doi:10.1371/journal.pmed.1000020.
  • McGann, J., and M. Johnstone. 2005. “The Power Shift and the NGO Credibility Crisis.” Brown Journal of World Affairs 11: 159–172.
  • Miller, S., and M. J. Selgelid. 2007. “Ethical and Philosophical Consideration of the Dual-Use Dilemma in the Biological Sciences.” Science and Engineering Ethics 13: 523–580.
  • Munro, William, and Rachel Schurman. 2008. “Sustaining Outrage: Cultural Capital, Strategic Location and Motivating Sensibilities in the U.S. Anti-Genetic Engineering Movement.” In The Fight Over Food: Producers, Consumers and Activists Challenge the Global Food System, edited by W. Wright and G. Middendorf, 145–176. University Park: The Penn State University Press.
  • NAS (National Academies of Science). 1995. On Being a Scientist. Washington, DC: National Academies Press.
  • NASEM (National Academies of Sciences, Engineering and Medicine). 2016. Gene Drives on the Horizon. Washington, DC: National Academies Press.
  • NCPHSBBR (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research). 1978. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (Department of Health, Education, & Welfare Publication No. OS 78-0012).
  • Nielsen, T. H., E. Jelsø, and S. Öhman. 2002. “Traditional Blue and Modern Green Resistance.” In Biotechnology – The Making of a Global Controversy, edited by M. Bauer, and G. Gaskell, 179–202. Cambridge: Cambridge University Press.
  • Norton, B. 2015. Sustainable Values, Sustainable Change. Chicago: University of Chicago Press.
  • Percival, R. V., C. H. Schroeder, A. S. Miller, and J. P. Leape. 2013. Environmental Regulation: Law, Science, and Policy. 13th ed. Indianapolis: Wolters Kluwer.
  • Peterson, T. R. 1997. Sharing the Earth: The Rhetoric of Sustainable Development. Columbia: University of South Carolina Press.
  • Pielke, Roger A. 2007. The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge: Cambridge University Press.
  • Resnik, D. B. 2014. “Ethical Issues in Field Trials of Genetically Modified Disease-Resistant Mosquitoes.” Developing World Bioethics 14 (1): 37–46.
  • Rip, A., and H. te Kulve. 2008. “Constructive Technology Assessment and Socio-Technical Scenarios.” In The Yearbook of Nanotechnology in Society, Vol. 1, edited by E. Fisher et al., 49–70. Dordrecht, NL: Springer.
  • Rollin, B. E. 2006. Science and Ethics. New York: Cambridge University Press.
  • Rowe, W. D. 1977. The Anatomy of Risk. New York: John Wiley & Sons.
  • Sandel, M. J. 2010. Justice: What’s the Right Thing to Do? New York: Farrar, Straus and Giroux.
  • Sankar, P. L., and M. K. Cho. 2015. “Engineering Values into Genetic Engineering: A Proposed Analytic Framework for Scientific Social Responsibility.” The American Journal of Bioethics 15: 18–24.
  • Scheman, Naomi. 2011. Shifting Ground: Knowledge and Reality, Transgression and Trustworthiness. New York: Oxford University Press.
  • Sclove, R. 1995. Democracy and Technology. New York: Guilford Press.
  • Shrader-Frechette, K. 1991. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley: University of California Press.
  • Slovic, P. 1987. “Perception of Risk.” Science 236: 280–285.
  • Slovic, P. 1999. “Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield.” Risk Analysis 19: 689–701.
  • Starr, C., and C. Whipple. 1980. “Risks of Risk Decisions.” Science 208: 1114–1119.
  • Stern, P., and H. V. Fineberg, eds. 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.
  • Stilgoe, J., R. Owen, and P. Macnaghten. 2013. “Developing a Framework for Responsible Innovation.” Research Policy 42 (9): 1568–1580.
  • Thompson, P. B. 2003. “Value Judgments and Risk Comparisons: The Case of Genetically Engineered Crops.” Plant Physiology 132: 10–16.
  • Thompson, P. B. 2008. “Current Ethical Issues in Animal Biotechnology.” Reproduction, Fertility and Development 20: 67–73.
  • Thompson, P. B. 2012. “Synthetic Biology Needs a Synthetic Bioethics.” Ethics, Policy and the Environment 15: 1–20.
  • Thompson, P. B., and W. Hannah. 2008. “Food and Agricultural Biotechnology: A Summary and Analysis of Ethical Concerns.” Advances in Biochemical Engineering and Biotechnology 111: 229–264.
  • Tuana, N. 2004. “Coming to Understand: Orgasm and the Epistemology of Ignorance.” Hypatia 19: 194–232.
  • Tuana, N. 2006. “The Speculum of Ignorance: The Women’s Health Movement and Epistemologies of Ignorance.” Hypatia 21 (3): 1–19.
  • Tuana, N. 2013. “Embedding Philosophers in the Practice of Science: Bringing Humanities to the Sciences.” Synthese 190: 1955–1973.
  • Virginia Tech Research Team. 2017. “Flint Water Study.” Accessed July 17, 2017. http://flintwaterstudy.org/.
  • Von Schomberg, R. 2007. “From the Ethics of Technology Towards an Ethics of Knowledge Policy & Knowledge Assessment.” Accessed January 17, 2017. SSRN: https://ssrn.com/abstract=2436380 or http://dx.doi.org/10.2139/ssrn.2436380.
  • Whyte, K. P. 2017. “The Dakota Access Pipeline, Environmental Injustice and U.S. Settler Colonialism.” Red Ink: International Journal of Indigenous Literature, Arts and Humanities 19 (1): 154–169.
  • World Health Organization (WHO)/Special Programme for Research and Training in Tropical Diseases (TDR)/Foundation of the National Institutes of Health (FNIH). 2014. “The Guidance Framework for Field Testing Genetically Engineered Mosquitoes.” TDR, June 2014, 169 pp. ISBN 978 92 4 150748 6. http://www.who.int/tdr/publications/year/2014/guide-fmrk-gm-mosquit/en/.
  • Wynne, B. 1989. “Sheepfarming After Chernobyl: A Case Study in Communicating Scientific Information.” Environment: Science and Policy for Sustainable Development 31 (2): 10–39.
  • Wynne, B. 2001. “Creating Public Alienation: Expert Cultures of Risk and Ethics on GMOs.” Science as Culture 10 (4): 445–481.