OPEN PEER COMMENTARIES

Against the Precautionary Approach to Moral Status: The Case of Surrogates for Living Human Brains

This article refers to:
Human Brain Surrogates Research: The Onrushing Ethical Dilemma

My paper builds on conceptual tools from three interrelated philosophical debates that, I believe, may help structure the important if chaotic discussions about surrogates for living human brains and resolve some practical issues related to regulatory matters. In particular, I refer to the discussions about the "moral precautionary principle" in research ethics (Koplin and Wilkinson 2019), about normative uncertainty in ethics (MacAskill, Bykvist, and Ord 2020), and about the inductive risk problem for animal welfare scientists (Birch 2018). I elucidate the possible meanings of the phrase "a too good human brain surrogate" used by Henry T. Greely (2021), and I demonstrate that the evaluation of the practical and regulatory implications of the "goodness" of such surrogates created for research purposes should be sensitive to the possible consequences of two types of error: the under-attribution and the over-attribution of moral status to such beings. Many authors writing about this topic (including Greely 2021; but see also, e.g., Koplin and Savulescu 2019) concentrate only on the first type of error, neglecting the negative consequences of the second, i.e., over-attribution.

Greely (2021) reviews four types of surrogates for living human brains in human bodies (genetically edited nonhuman animals, human/nonhuman brain chimeras, human neural organoids, and living ex vivo human brain tissues) and discusses issues that are important from the perspective of ethical and regulatory standards. The author worries that if we create overly sophisticated surrogates for living human brains, "they may themselves deserve some of the kinds of ethical and legal respect that have limited brain research in human beings" (34), tacitly assuming that recognizing their high moral status, similar to that we typically ascribe to humans, or at least higher than that of animals, may block some brain research. On my interpretation, the key issue of Greely's essay, although not expressed explicitly, is how to evaluate scientific evidence and how to act (e.g., how to design a regulatory framework) under severe uncertainty at many different levels: factual (about the biological mechanisms occurring in surrogates), translational (about conflicting findings that may support divergent policy recommendations), theoretical/philosophical (about ontological claims concerning surrogates' brains), and normative (about the correct understanding, if any, of their moral status).

The discussed paper does not propose any solution; apart from welcoming the recognition of the importance of interdisciplinary approaches to these issues, its main purpose is to alert the reader to new and tremendously complicated ethical problems. The author asks many questions in a very literal sense: the paper contains more than 50 interrogative sentences on many different issues, some of which presuppose far-reaching, controversial, and philosophically loaded theses. For example, writing about living models of human brains, he asks about the evidence needed to ascribe full moral status to a surrogate, assuming that there is some threshold of evidence: "If it looks like a human brain and acts like a human brain, at what point do we have to treat it like a human brain – or a human being?" (34). Then, writing about human/nonhuman brain chimeras, he asks how confident researchers should be before equating a biological mechanism occurring in a surrogate in a new context with the analogous mechanism in its standard environment, without distinguishing between researchers' and regulators' roles in that matter: "how confident can researchers be that, in that alien context, it [a bit of human tissue] is behaving the way it would inside a human brain?" (34).

THE PRECAUTIONARY PRINCIPLE ABOUT MORAL STATUS

I believe that a useful theoretical framework for structuring the discussion on this topic is supplied by the debates revolving around the precautionary principle (PP), which states that in situations of certain types of uncertainty, a decision-maker should refrain from actions or policies that run the risk of causing harm to the public or to the environment, even if the harmfulness of these actions or policies has not been scientifically established beyond reasonable doubt. The PP has typically been understood in at least three ways: as a decision rule (helping to select among concrete policy options), as an epistemic rule (regulating standards of evidence in public decisions), or as a meta-rule (imposing general constraints on how decisions, e.g., about healthcare policy, are made) (Steel 2015).

The possible harms mentioned in standard definitions of the PP presuppose a situation of empirical uncertainty, e.g., whether a new substance will harm the environment, or whether some stimulus will cause an animal pain. However, the PP may also be understood as a restrained approach in cases of theoretical uncertainty, when we have to deal with the vagueness of theoretical concepts: what counts as harm to the environment? What counts as an instance of animal pain? This understanding of the PP is visible in recent debates about animal sentience: some authors have argued that PP-type reasoning may justify extending the scope of animal protection to some species (e.g., cephalopods) on a benefit-of-the-doubt approach: in the absence of strong evidence to the contrary, a decision-maker should "take seriously the hypothesis" that these animals may have some relevant biological capacity, e.g., feel pain or have some form of consciousness (Birch 2017).

I believe that the PP approach may be interpreted even more broadly, as the moral precautionary principle (mPP), which also recommends a restrained approach in cases of normative uncertainty stemming from moral or evaluative matters. In this sense, the uncertainty does not concern empirical facts about animal or surrogate suffering, but the normative concept of moral status itself (cf. the discussions about early human embryos; Żuradzki 2014). Some of Greely's questions may be interpreted in this sense, e.g., "If, for example, the existence of a human-like self-awareness were ethically relevant, what, if anything, could neuroscience tell us about its existence or absence in a genetically modified monkey, a human/nonhuman brain chimera, a human neural organoid, a chunk of frontal cortex, or a whole ex vivo human brain?" (42). The uncertainty expressed here concerns not only a factual level ("what, if anything, could neuroscience tell us about…") but also a normative one ("If, for example, the existence of a human-like self-awareness were ethically relevant…"). I believe that this double hedging, both factual and normative, lies behind views claiming that a scientist-advisor or a decision-maker (e.g., a member of an animal research ethics committee) should refrain from permitting an action that runs the risk of causing harm to a surrogate, even if the very meaning of this "harmfulness" has not been established (cf. Koplin and Wilkinson 2019, 442, on the "doubly uncertain" status of human-pig chimeras). This view is expressed straightforwardly by some authors writing about human-animal chimera research, e.g., "Because it would be gravely unethical to harm a chimeric animal with full moral status, we should generally err on the side of overestimating moral status rather than underestimating it" (Koplin and Savulescu 2019).

OVER-ATTRIBUTION OF MORAL STATUS

I therefore think the following is an accurate way of understanding the surrogate research dilemma. A scientist-advisor or a decision-maker faces two possible options: (1) give the permission needed to conduct the research (e.g., to create or develop sophisticated surrogates), or (2) reject the request and ban this research (or this type of research) (see note 1). These decisions may produce four possible outcomes (see Table 1): A. The research is permitted and conducted, but surrogates have, in fact (whatever that may mean), higher moral status (i.e., similar to humans, or at least higher than that of the animal species on which such research is permitted); B. The research is permitted and conducted, and surrogates have no higher moral status; C. The research is banned, and surrogates have higher moral status; D. The research is banned, but surrogates have no higher moral status (cf. MacAskill, Bykvist, and Ord 2020, ch. 8).

Table 1. The surrogate research dilemma

                          Surrogates have higher    Surrogates have no
                          moral status              higher moral status
(1) Permit the research   A                         B
(2) Ban the research      C                         D

Those who (explicitly or implicitly) use the PP and/or the mPP in this case would balance the risks of two types of error in a specific way: under-attribution, which involves a failure to recognize authentic moral status, and over-attribution, which involves recognizing moral status in its absence. Thus, in my opinion, the main discussion about the permissibility of surrogate research depends on attitudes toward weighing these two types of risk: opponents of such research (or those advising a cautionary approach) believe that avoiding errors of type A is much more important than avoiding those of type D. One reason for such an approach may stem from the assumption that an animal welfare expert's recommendations in such cases should count only the expected welfare of the nonhumans affected by the policy, and not human welfare in the long term, i.e., possible social benefits (cf. Birch 2018, section 4). However, this is not a self-explanatory view. Its proponents should explain precisely why a decision-maker should prefer option (2) in situations where there is even a very slight chance of under-attribution.
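The weighing at stake here can be made explicit in the expected-choiceworthiness framework discussed by MacAskill, Bykvist, and Ord (2020, ch. 8). The following is a minimal illustrative sketch, not a formula from Greely or the cited authors; the symbols p and V_A through V_D are my own notation for the decision-maker's credence and for the moral values of the four outcomes.

```latex
% Let p be the decision-maker's credence that surrogates have higher
% moral status, and V_A, V_B, V_C, V_D the moral values assigned to
% outcomes A-D above. Permitting the research (option 1) maximizes
% expected choiceworthiness over banning it (option 2) iff:
\[
  p\,V_A + (1-p)\,V_B \;>\; p\,V_C + (1-p)\,V_D
  \quad\Longleftrightarrow\quad
  p\,(V_C - V_A) \;<\; (1-p)\,(V_B - V_D).
\]
% A PP/mPP-style weighting treats the under-attribution loss
% (V_C - V_A) as so large that even a very small p tips the
% inequality toward banning the research.
```

On this rendering, the disputed question is precisely how large the under-attribution loss may legitimately be taken to be relative to the foregone benefits of banned research.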

Additional information

Funding

This research has received funding from the European Research Council (ERC) under the H2020 European Research Council research and innovation program (grant agreement 805498), and benefited from a research stay at the Fondation Brocher (https://www.brocher.ch/). This article is made open access with funding support from the Jagiellonian University under the Excellence Initiative – Research University programme (the Priority Research Area Heritage).

Notes

1 For simplicity’s sake, I leave aside a third option: limiting this kind of research in some way.

REFERENCES

  • Birch, J. 2017. Animal sentience and the precautionary principle. Animal Sentience: An Interdisciplinary Journal on Animal Feeling 16 (1):1–15.
  • Birch, J. 2018. Animal cognition and human values. Philosophy of Science 85 (5):1026–37.
  • Greely, H. T. 2021. Human brain surrogates research: The onrushing ethical dilemma. The American Journal of Bioethics 21 (1):34–45. doi:10.1080/15265161.2020.1845853.
  • Koplin, J., and J. Savulescu. 2019. Time to rethink the law on part-human chimeras. Journal of Law and the Biosciences 6 (1):37–50. doi:10.1093/jlb/lsz005.
  • Koplin, J., and D. Wilkinson. 2019. Moral uncertainty and the farming of human-pig chimeras. Journal of Medical Ethics 45 (7):440–6. doi:10.1136/medethics-2018-105227.
  • MacAskill, W., K. Bykvist, and T. Ord. 2020. Moral uncertainty. Oxford, UK: Oxford University Press.
  • Steel, D. 2015. Philosophy and the precautionary principle. Cambridge, UK: Cambridge University Press.
  • Żuradzki, T. 2014. Moral uncertainty in bioethical argumentation: A new understanding of the pro-life view on early human embryos. Theoretical Medicine and Bioethics 35 (6):441–57. doi:10.1007/s11017-014-9309-1.