Accountability in Research
Ethics, Integrity and Policy
Volume 24, 2017 - Issue 1

The Role of Intuition in Risk/Benefit Decision-Making in Human Subjects Research

, J.D., Ph.D.
 

ABSTRACT

One of the key principles of ethical research involving human subjects is that the risks of research should be acceptable in relation to expected benefits. Institutional review board (IRB) members often rely on intuition to make risk/benefit decisions concerning proposed human studies. Some have objected to using intuition to make these decisions because intuition is unreliable and biased and lacks transparency. In this article, I examine the role of intuition in IRB risk/benefit decision-making and argue that there are practical and philosophical limits to our ability to reduce our reliance on intuition in this process. The fact that IRB risk/benefit decision-making involves intuition need not imply that it is hopelessly subjective or biased, however, since there are strategies that IRBs can employ to improve their decisions, such as using empirical data to estimate the probability of potential harms and benefits, developing classification systems to guide the evaluation of harms and benefits, and engaging in moral reasoning concerning the acceptability of risks.

Acknowledgments

I am grateful to Sam Bruton, Jonathan Kimmelman, Joel Pust, Michael Resnik, and David Wendler for helpful comments and discussions.

Funding

This research was supported by the intramural program of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). It does not represent the views of the NIEHS, NIH, or U.S. federal government.

Notes

1 I will use the term “principle” rather broadly in this article to include any general rule for conduct or decision-making.

2 “Risk” is typically understood as a product of the probability (or likelihood) and magnitude (or severity) of a harm (Levine, 1988; Rid, Emanuel, and Wendler, 2010). Thus, risk includes an epistemic component, i.e., probability. Most regulations and guidelines simply refer to “benefits” without placing any epistemic qualifications on the term. However, as Levine (1988) points out, this way of referring to benefits is misleading, since benefits may also occur with some degree of probability. For the sake of consistency and clarity, in this article I will refer to “risks” and “expected benefits.”
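The probability-times-magnitude formula in this note can be made concrete with a minimal sketch. The 0–10 severity scale and all numbers below are hypothetical illustrations, not data from the article:

```python
import math

# Risk as the product of a harm's probability and its magnitude (severity),
# following the standard formula cited in the note. The 0-10 severity scale
# and the example numbers are hypothetical.

def risk(probability: float, magnitude: float) -> float:
    """Expected harm: probability of the harm times its severity."""
    return probability * magnitude

# A rare but severe harm can carry the same risk as a common but mild one:
rare_severe = risk(0.01, 8.0)  # 1% chance of a severity-8 harm
common_mild = risk(0.08, 1.0)  # 8% chance of a severity-1 harm
print(math.isclose(rare_severe, common_mild))  # both work out to 0.08
```

This is why the note insists on the epistemic component: two hazards with very different severities can pose the same risk once probability is factored in.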

3 IRBs are also known as Research Ethics Boards (REBs) or Research Ethics Committees (RECs) outside the United States.

4 Philosophers often characterize intuitions as beliefs, while psychologists describe them as judgments. To accommodate both viewpoints, I will consider intuitions to be beliefs or judgments. See Pust (2012) and Kahneman (2011).

5 Many philosophers and scientists would say that pain is inherently subjective. See Resnik, Rehm, and Minard (2001).

6 The justification relationship could involve deductive, inductive, or explanatory connections between beliefs. For example, I might be justified in believing that octagons have more sides than hexagons because this follows from the definitions of these objects (deductive); that John is married because he is wearing a gold ring on his left ring finger, and most people who wear gold rings on that finger are married (inductive); or that my car battery is dead because this belief explains why the radio and starter motor are not working (explanatory).

7 An important problem for coherentists is to specify what is meant by “cohere.” According to some coherentists, coherence consists of internal consistency of beliefs and external validation of beliefs via their practical utility or basis in reality (Sayre-McCord, 1996).

8 The foundationalism vs. coherentism issue does not arise for those philosophers, known as non-cognitivists, who deny that we can have moral knowledge (Sayre-McCord, 1996). For example, emotivists, such as Ayer (1952), claim that moral discourse expresses emotions but does not express judgments or beliefs. I will assume that we can have moral knowledge, however.

9 A moral principle can be viewed as a moral belief concerning a general rule for conduct.

10 The concept of “reduction” I have in mind here has nothing to do with reduction in the philosophy of science and the philosophy of mind. By “reduce” I mean “use less” or “scale down.” For more on reduction, see Van Riel (2014).

11 By “cogent” I mean “valid” for deductive reasoning or “good” for inductive reasoning.

12 By “empirical” I mean beliefs related to what we observe in the world. Foundationalists argue that intuitive, self-evident beliefs concerning our relationship to the world underlie all of our empirical knowledge, but I will not consider that issue here. See Audi (2010).

13 See footnote 2. Some approaches to risk management include other factors that enter into the evaluation of risk, such as the degree to which an adverse outcome is within one’s control or the uncertainty related to the outcome (Kahneman, 2011). Since most discussions of risk in human research ethics focus on the simple formula used here, I will stick to this approach rather than exploring avenues that are beyond the scope of this article.

14 By “value” I mean an aim or goal that is morally worthwhile, such as happiness, health, autonomy, justice, or social welfare. See Rawls (1971) and Nussbaum (2011).

15 Bernabe et al. (2012a, 2012b) apply expected utility theory to IRB risk/benefit decision-making. One could argue that expected utility theory is a utilitarian approach to ethical decision-making because it holds that one should choose the action that maximizes overall expected utility, where an expected utility is a product of the probability of an outcome and its utility (e.g., value or worth).
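A hedged sketch of how expected utility theory scores a decision, in the spirit of the application this note describes. The outcomes, probabilities, and utility values are hypothetical illustrations, not taken from Bernabe et al.:

```python
# Expected utility: sum over outcomes of probability * utility, where harms
# take negative utilities and benefits positive ones. All numbers below are
# hypothetical.

def expected_utility(outcomes):
    """Sum of probability * utility over a protocol's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical protocol A: promising, but with a small chance of serious harm.
protocol_a = [(0.70, 2.0),    # 70% chance of a modest benefit
              (0.05, -10.0),  # 5% chance of a serious harm
              (0.25, 0.0)]    # 25% chance of no effect

# Hypothetical protocol B: safer, but less promising.
protocol_b = [(0.50, 1.0), (0.50, 0.0)]

# On the utilitarian reading, one should choose the option with the higher
# overall expected utility.
print(round(expected_utility(protocol_a), 2))  # 0.9
print(round(expected_utility(protocol_b), 2))  # 0.5
```

Note that the calculation ranks protocol A higher even though it carries a chance of serious harm, which illustrates why a purely aggregative approach to risk/benefit assessment can be contested.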

16 The Nazi hypothermia experiments were conducted on non-consenting human subjects (i.e., concentration camp prisoners). The studies exposed human beings to extremely cold temperatures in order to collect data on how the body responds to hypothermia, presumably to develop treatments for this condition. Obviously, lack of consent was a serious moral problem with these experiments (Shamoo and Resnik, 2015). But one might hold that such experiments would be morally questionable even if the subjects were consenting volunteers.

17 Workload is a function of how many actions an IRB is expected to review at a particular meeting. Institutions can reduce IRB workload by increasing the number of IRBs. For example, if an institution has only one IRB that typically reviews 10 new protocols, 10 renewals, 6 amendments, and 6 problem reports per month, it could divide this workload in half by creating a new IRB to handle half of these actions.

