Journal of Medicine and Philosophy
A Forum for Bioethics and Philosophy of Medicine
Volume 32, 2007 - Issue 2
Original Articles

So-Called “Clinical Equipoise” and the Argument from Design

Pages 135-150 | Published online: 12 Apr 2007

Abstract

In this article, I review and expand upon arguments showing that Freedman's so-called “clinical equipoise” criterion cannot serve as an appropriate guide and justification for the moral legitimacy of carrying out randomized clinical trials. At the same time, I try to explain why this approach has been given so much credence despite compelling arguments against it, including the fact that Freedman's original discussion framed the issues in a misleading way, making certain things invisible: Clinical equipoise is conflated with community equipoise, and several versions of each are also conflated. A misleading impression is thereby given that, rather than distinct criteria being arbitrarily conflated, a puzzle has been solved and a number of features unified. Various issues are pushed under the rug, hiding flaws of the “clinical equipoise” approach and thus deceiving us into thinking that we have a solution when we do not. Particularly significant is the approach's neglect of the crucial distinction between the individual patient decision and the policy decision.

I. INTRODUCTION

Equipoise — the state of uncertainty or lack of grounded preference concerning which of two treatment options is preferable — is often cited as the central criterion for the moral legitimacy of carrying out or continuing a randomized clinical trial (RCT). But despite its wide appeal and acceptance in the form of Freedman's so-called “clinical equipoise,” it cannot serve this function.

In this article, I review and expand upon some arguments against Freedman's so-called “clinical equipoise” and place them in a wider context of discussions of equipoise and the ethics of clinical trials. The goal is not only to clarify why the criterion is unacceptable, but also to explain why it has been given so much credence despite compelling arguments against it.

We perform RCTs to gain reliable knowledge about the safety and efficacy of therapeutic regimens, with the further goal of better health care for future patients. The research protocols involved may impose requirements such as placebos, randomization, and the continuation of the trial to an appropriate level of statistical significance. This poses a tension between the welfare of the human subjects and the attainment of information necessary for the improvement of future medical care. One would like guidance here — a principle that would provide a stopping rule and a moral justification for this — and equipoise has often been appealed to in this role.

Now, if we understand equipoise in terms of an assessment of what the evidence objectively says, or what some one individual thinks on reflection, and if we conceive of equipoise in a precise way as complete uncertainty, then it is extremely rare or fragile. This won't allow us to carry out a trial to the point where we have the evidence about the safety and efficacy of the treatments that we need to have.

So at present those who endorse equipoise as a substantive criterion are proponents of Freedman's so-called “clinical equipoise” (Freedman, 1987). According to this view, the relevant uncertainty that determines whether admission to a trial creates a moral tension is the uncertainty of the medical community rather than that of the individual practitioner. Freedman asserts that the old conception (what he calls “theoretical equipoise,” but whose explicit criterion denotes “individual equipoise”) can be ignored from the ethical point of view, and that we are justified in carrying out trials, or continuing trials underway, so long as there remains a lack of agreement in the community that one arm of the trial is superior. This state, which Freedman calls “clinical equipoise” (but which really should be called “community equipoise”), is said to obtain when there is “present or imminent controversy in the clinical community” (Freedman, 1987, p. 141), and it is disturbed precisely “(a)t the point when the accumulating evidence in favor of B is so strong that the committee of investigators believe no open-minded clinician informed of the interim results would still favor A” (Freedman, 1987, p. 144). I will use “the CE criterion” to refer to this — that is, to the equipoise lost when each of the members of the community has fallen out of “individual equipoise.”

I have been arguing for some time that this so-called “clinical equipoise” solution to this problem is illegitimate (Gifford, 1995, 2000, 2007). I argue that the criterion is importantly ambiguous, but also that there is no single interpretation according to which it gives us clear and reasonable advice that would solve our problem. I also contend that it pushes various issues under the rug, hiding its flaws and thus deceiving us into thinking that we have a solution when we do not.

I find certain aspects of this debate puzzling and frustrating, because I take myself to have established in 1995 (Gifford, 1995) that Freedman's criterion is inadequate, and that clearly some alternative justification(s) for carrying out RCTs must be sought and relied upon instead. And if any were unconvinced because they thought I had unfairly substituted community equipoise for clinical equipoise, I explained in Gifford (2000) why this was not the case. But while I have not seen responses to my arguments, acceptance of the so-called “clinical equipoise” criterion continues. Perhaps, in order to maintain a charitable view of my own writing and others' reading, it would be good to remind ourselves of the fact that this is complicated, messy, slippery terrain, with diverse strong psychological motivations at play.

One complication is that, even if one is focused on the sort of equipoise being urged by Freedman and the defenders of his view (something “in the area of” the CE criterion), there are numerous versions of equipoise – some discretely different and some along a continuum. There are, for example, questions of whether it is clinicians or potential subjects who are to be in equipoise, how membership in a community is determined, and how the fact of community equipoise is constructed out of the individual judgments (is equipoise upset only when there is unanimity amongst the judgers, or need there be only a majority?).
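To see how much turns on the aggregation question alone, consider a minimal sketch; the two rules and the numbers here are my own illustration, not anything specified by Freedman:

```python
# Illustrative sketch (not Freedman's): two ways of aggregating
# individual clinicians' judgments into "community equipoise."
# A judgment of 0 means the clinician is still in individual equipoise;
# +1 means he or she has come to favor arm A.

def equipoise_unanimity(judgments):
    """Community equipoise persists while ANY clinician remains undecided."""
    return any(j == 0 for j in judgments)

def equipoise_majority(judgments):
    """Community equipoise persists unless a strict majority favors one arm."""
    favor_a = sum(1 for j in judgments if j == +1)
    return favor_a <= len(judgments) // 2

# Seven of ten clinicians have come to favor A; three are still undecided.
community = [+1] * 7 + [0] * 3

print(equipoise_unanimity(community))  # True: CE intact on one reading
print(equipoise_majority(community))   # False: CE disturbed on the other
```

On the very same distribution of opinion, one reading says randomization may continue and the other says it must stop, so “community equipoise” does not yet name a single criterion.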

Further, are we to imagine that those applying the criterion are individual clinicians considering enrolling their patients in a trial, or is the idea that it is part of the reasoning of institutional review boards, or data and safety monitoring committees? And if individual clinicians: is the moral question to be viewed as whether it is legitimate to: enroll subjects, allow them to participate, offer participation to them, or recommend participation to them?

We should also distinguish between what might be called “complete” equipoise and “approximate” equipoise: While complete precision may be asking too much, adoption of an approximate view makes it difficult to see how to rein it in or give clear advice.

Finally, what is to count as showing CE to be inadequate? Obviously it's not to be a sufficient criterion; clearly one has an obligation, for example, to obtain informed consent (cf. Emanuel, Wendler, & Grady, 2000). Still, does one need to show that CE is not even a relevant matter to consider, or just that it is only a prima facie consideration that can be overridden by other factors?

In any case, I contend that the proponents of Freedman's conception of equipoise fail to give a justification for some one clear criterion, but the above ambiguities and complications make it hard to see this clearly. All this ambiguity makes it too easy for people to fail to examine the position carefully enough to see its flaws. It makes it too easy to divert one's attention from one subtype to another rather than finish a given line of reasoning. Relatedly, it makes it too easy for people to think that they are applying it when they really are not.

In what follows, I will describe some of the reasons why I think the proponents have not made their case, and indeed, why the position is just wrong. I will at the same time endeavor to explain why the CE criterion has been given such credence despite compelling arguments against it. To this end, I will discuss a number of important misleading features of Freedman's original discussion which, I think, framed the issues in a confused way and made certain things invisible.

In general, the problem is that the position put forward fails to make certain key distinctions (including concerning the various subtypes of equipoise mentioned above), with the result that the impression is given that certain pairs of concepts or theses are in effect one concept or thesis. This makes it seem like there is a case for the position when there is not. In addition, I will draw out an implicit background framework that I will call the “argument from design.” Finally, I will discuss briefly how this discussion ties in with proposals given by Veatch and by Miller and Brody, who argue that equipoise is simply irrelevant.

II. THE “WE DON'T REALLY KNOW” RESPONSE

Recall again the initial view of equipoise, according to which an individual is completely indifferent between the two therapies. How can this justify trials that are anything but knife-edge balanced, let alone allow trials to continue to statistical significance?

A tempting response is that we don't really know that one arm of the study is better until we obtain the statistically significant evidence that is the point of the trial. But, of course, at that point, we won't need to go any further. For we would then have all the evidence we need in order to justify taking the action of publishing or submitting to the FDA, etc. On this view, because of what counts as knowledge, there is a perfect match between the amount of data which raises the moral tension and the amount of data which is required to establish our knowledge base for future medical care.

This strategy — tying knowledge tightly to statistical significance — may portray itself as the scientific (as opposed to unscientific or more intuitive and sloppy) perspective. Those taking this perspective say: If you look at the situation from the properly scientific point of view, you will see that there is no problem.

But of course this is a smokescreen. It pushes under the rug the fact that confirmation or strength of evidence comes in degrees; it assumes, bizarrely, that “knowledge” pops into existence all at once. Acknowledging that evidence accumulates by degrees forces upon us the consideration that a certain amount of evidence might be sufficient to decide between two treatments where the decision must be made now (as in deciding about a present patient), whereas that same amount of evidence will not be sufficient to make a decision that a trial can be stopped on grounds that we have all the information we need: to publish, submit to the FDA, or change future practice, and to forego further data from that trial. This distinction between the present “individual patient” decision and the “policy” decision is key, yet it is systematically ignored in the discussions of CE.
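The gap can be made concrete with a toy calculation; the interim counts and the decision thresholds below are invented for illustration, not drawn from any actual trial:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical interim data (invented numbers, purely illustrative):
# arm A: 14/20 successes; arm B: 9/20 successes.
succ_a, n_a = 14, 20
succ_b, n_b = 9, 20

p_a, p_b = succ_a / n_a, succ_b / n_b
pooled = (succ_a + succ_b) / (n_a + n_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se

# Evidence bearing on the present-patient decision: how likely is it
# that A really is the better arm? (normal approximation, flat prior)
prob_a_better = normal_cdf(z)

# Evidence bearing on the policy decision: the conventional two-sided test.
p_two_sided = 2 * (1 - normal_cdf(abs(z)))

print(round(prob_a_better, 3))  # ~0.945: enough to pick A for a patient now
print(round(p_two_sided, 3))    # ~0.11: far from the p < 0.05 policy bar
```

On identical data, the two questions come apart: a clinician forced to choose now has reason to pick A, while the trial's own stopping standard is nowhere near satisfied.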

To lay bare how this reference to “scientific” counts as rhetoric: Note that the portrayal of this position (we don't really “know” until the point of statistical significance) as “scientific” conflates two senses of “scientific.” In the first, scientific just means rational or relying on the best method we know of (whereas “unscientific” means not being so careful, thus leaving ourselves open to being reasonably dismissed). In the second, scientific means in accord with the particular socially sanctioned scientific norms for making decisions relative to the goals of generating generalizable knowledge. This of course builds into the notion of “science” a (broadly utilitarian) standpoint concerned with the progress of science and the provision of information for policy decisions. So this does not constitute a reason for thinking that the individual patient standard, utilizing data not yet at the agreed-upon level of statistical significance, is irrational or unscientific. It's just not the standard that we would need to use in order to maximize the impersonal goals of science. (But we already knew that.) It simply is not the case that one has no reason to treat a patient differently than the trial does until the point where statistical significance occurs; we must not conflate the goals of scientific research and those of subject protection.

III. FREEDMAN'S PAPER

As mentioned earlier, Freedman introduced the term “clinical equipoise” in Freedman (1987) and purported to show that the ethically relevant sort of equipoise could indeed be retained long enough to carry out roughly the trials that we want to. The claim was that this could resolve the dilemma for those involved with clinical trials without recourse either to utilitarian trade-offs or to the blind following of criteria of statistical significance. Nor would it hide behind some ad hoc strategy relying on the claim that we don't really know until the point of statistical significance. But this, in my view, turns out to be another smokescreen, albeit a more subtle and sophisticated one.

This shift from individual to community equipoise simply does not achieve anything like what it advertises. On the one hand, there is, in effect, a loosening of the standard for how hard to try to do the best for one's patient. And, as discussed above, the ambiguity about what the criterion is makes it harder to see this, and it makes it too easy to think one is applying CE when what is being done is implicitly accepting or utilizing utilitarian trade-offs (perhaps under the guise of the “approximate view”). This is an important point in that Freedman says explicitly that it is a virtue of CE that it does not require such trade-offs.

If one is allowed to apply the criterion in a very approximate way, then perhaps one may as well apply the individual equipoise standard and do that in an approximate way. (Indeed, why not make it approximate enough to include whatever standard of statistical significance?)

And upon close examination, one sees that CE cannot continue to be in force long enough to carry out trials to the point of statistical significance (or any other point that would secure the knowledge we need). We will still need a justification for why we may continue the trial to completion once CE has been disturbed.

So why is it a common view that “so-called clinical” equipoise ameliorates or even solves the ethical dilemma? I believe that an important part of the answer lies in the fact that Freedman's paper is woven together by means of certain very effective but ultimately illegitimate rhetorical devices. It encourages one to assume things that aren't true, and then makes it hard to see through this. It is not my contention that Freedman wove this web intentionally. He simply wove together the general picture at a certain level of grain, and then he and others have failed to ask certain questions or note certain distinctions. Others who are disposed to accept the “solution,” because, for example, they are relieved to find that we can go on and do the trials without the moral tension that has been raised, are even less likely to examine the matter in a more fine-grained manner.

In what follows, I lay out a set of rhetorical linkages made in the Freedman article. In particular, there are several instances where distinct (though not entirely clear) ideas are linked together as a unit in a way that seems commonsensical but in fact is unwarranted, and without emphasizing that this is being done. But this linkage then plays a crucial role in making the CE criterion seem reasonable.

For instance, various divergent ideas are made to seem jointly to add some credibility to the overall (CE) criterion. So even if no one of these seems compelling, together perhaps they could be so. But if this convinces readers, they have been duped; for one thing, these various underlying ideas do not count as a rationale for the same thing.

The first and perhaps most significant of these is the conflation between “clinical” and “community” equipoise (Gifford, 2000). There are then also subtypes of clinical equipoise properly so-called (Gifford, 2000), and, similarly, there are various importantly different interpretations of “community equipoise” (Gifford, 1995).

The final rhetorical linkage I will discuss is what I will call the Argument from Design. This strategy splices together what ethics allows us to do and what is scientifically needed, and it does this in a way analogous to the “knowledge is statistical significance” view described above. As I will show, this yields the false impression that CE remains undisturbed longer than it really can be.

IV. THE CENTRAL CONFLATION: CLINICAL AND COMMUNITY EQUIPOISES

Consider first clinical vs. community equipoise. Freedman's paper discusses two quite distinct conceptions — two distinct shifts from what Freedman takes the previous, inadequate, knife-edge conception to be. There is a proposed shift from individual to community equipoise (tied to the explicit criterion) and a proposed shift from theoretical to clinical equipoise. But the article does not mention that this is happening, or that there exist these two different dimensions that should be thought about separately. The term “community equipoise” is not used; I had to bring that term to the discussion to make sense of what was being proposed. The reader, if he or she notices the distinction at all, is given to believe that there is some one unified view being put forward (and that considerations of one kind count as reasons for matters of another kind). But this is not so.

The real criterion that might plausibly do the work of extending the length of trials concerns community equipoise: we can allow continuation of the trial provided that some (or enough to be deemed a reasonable minority) are not yet convinced. But the term that is used by Freedman and repeated by others is “clinical equipoise.” This has the effect of making readers see them as of a piece when they are not, and it makes it hard to talk about it — hence my cumbersome locution, ‘so-called “clinical equipoise”.’ Further, and important from a rhetorical point of view, the association of the CE criterion with these “clinical” matters also gives it an air of “clinical legitimacy.” (“Sure, if you conceive of things in some abstract, philosophical way, there seems to be an ethical tension, but if you conceive of things realistically, as the choices really emerge in clinical practice, you see that that was just a mirage.”)

But it is this which is the illusion. Conceiving the problem in clinical rather than “theoretical” (or basic science) terms is an entirely separate matter from that of agreement or disagreement within the community of clinicians. There is nothing inherently clinical about CE, and the “clinical perspective” does nothing to extend the time until equipoise is disturbed — but this fact is hidden.

Amongst the things discussed in Freedman's article which could reasonably be labeled clinical equipoise, the most significant is equipoise about a clinical question, such as, “Is treatment A better, all things considered (including side-effects), than the available alternative, for patients in my practice?” The comparison concept, “theoretical equipoise,” is equipoise about a theoretical question, such as, “Does treatment A cause outcome O with greater frequency and extent than does a placebo, in such-and-such a narrowly defined, homogeneous population?” But the move from individual to community equipoise as the standard for assessment, which is the thing that could (under some interpretations) at least keep equipoise undisturbed substantially longer, is completely orthogonal to this matter of the nature of the question being asked. An individual might be in equipoise (or not) about a theoretical question, or he or she might be in equipoise (or not) about a clinical question. And there might be community agreement or not about a theoretical question, and there might be community agreement or not about a clinical question. Nothing about shifting to the community level requires or even suggests the clinical as opposed to the theoretical perspective.

Of course, it is appropriate to conceive of the community as “the clinical community.” For example, Veatch (2002) interprets the criterion's name in this way. This is a reasonable way to make the term make sense, but it is important to see that this does not constitute a discovery of what Freedman really meant that links up community and clinical equipoise in a substantial way. That is, the group of “judgers,” whose individual judgments of equipoise are to be combined to determine if CE exists, is made up of clinicians; but this is not an argument against anything that has been said here. The CE criterion yields a new perspective in that it shifts from the perspective of the individual clinician (who falls out of equipoise almost immediately) to the perspective of the community of those clinicians. Freedman's new insight is community equipoise, period.

To clarify my claim that community and clinical have nothing to do with one another, suppose for the moment that there had been a serious problem that past trials were always designed according to “merely theoretical” questions and criteria. And suppose for the moment that upon some paradigm shift, trials were now done such that when a trial is completed we really know that this particular regimen is the best therapy (taking into account its “net therapeutic index”) for such and such a group of (real) patients, rather than that some narrowly defined regimen is causally relevant to certain easily-measured outcome variables (in a certain homogeneous population). And suppose further this results in better patient care in the future, because we have obtained more relevant, applicable medical knowledge. This yields nothing by way of extending the time that trials are justified as evidence accumulates.

Surely the welfare of the patient/subjects ought to be conceived of in terms of whether the treatment really is best for them “all dimensions considered,” and hence whether equipoise has been disturbed would be conceptualized in these terms. But the proponents of so-called clinical equipoise are presumably also saying (and if they are not, they should be) that the question the trial should be designed to answer is also to be conceived of in terms of finding out what is best for patients “all dimensions considered.” So, while shifting to all-dimensions-considered could be a good thing in terms of the value of trial results, it would have no effect whatsoever on the nature and extent of the moral tension involved in trials — that is, on how much evidence can accumulate before “equipoise” is disturbed. There will still be the same “gap” between having enough evidence to warrant making an individual patient decision and having enough evidence to warrant making a policy decision. It is only the shift from an individual to a community standard — community equipoise — that even has a chance of addressing this.

And yet I submit that most readers of Freedman's article are under the impression that there has been a discovery of a dovetailing of two different goals: making trials more clinically relevant and avoiding giving subjects suboptimal treatment. Indeed, one gets the sense not just that an adequate accommodation has been found, but that a puzzle has been solved. It's as though the fact of this dovetailing shows that these different features can be given a common explanation or story that puts it all together. But this, while appealing, is false.

V. MORE MINOR CONFLATIONS

In addition to this broadest conflation between clinical equipoise and community equipoise, there are also conflations within each of clinical equipoise and community equipoise. And in each case, I believe something is occurring with the same structure as what occurs above in the community vs. clinical context. Concerning the “within-community equipoise” question, CE appears to have a rationale in evidential warrant in that one should take the views of one's colleagues seriously, and it appears to have a ground in the facts about what would have happened to the subject had they not gone into the trial. But these different rationales buttress two distinct community equipoise concepts. (The evidential warrant underpins a [very fragile] “preponderance of experts” view, and what the “otherwise” rationale most plausibly connects to is some version of the “broad community of dispensing physicians” view.) And once we choose the particular CE concept, one of the rationales falls away.

Similarly on the clinical side, one underlying rationale is the inherent imprecision of the assessment of various graphical representations, and another is the fact of having complex endpoints corresponding to the “net therapeutic index.” But — leaving aside the fact that neither of these maps onto and thus lends credence to the CE criterion — these two considerations don't correspond to each other. Yet I think, again, one is given the impression of a general principle that is buttressed by and unifies the different sub-considerations, and this makes the view seem more attractive than it really is.

The point is that, despite appearances, Freedman's arguments don't show that some common principle (CE) has multiple rationales, or captures a number of important features. Rather, Freedman's term ‘clinical equipoise’ applies to various distinct concepts that would in fact provide incompatible guidance, and hence between which we must choose. Further, analysis of these specific concepts one at a time shows that none provide a justification or adequate ethical guide for RCTs.

VI. THE ARGUMENT FROM DESIGN

I claim that CE doesn't actually let us continue far enough to get the knowledge that was the point of doing the trial. (And this is why I have been saying that community equipoise, unlike clinical equipoise properly so-called, is the principle that at least has some chance of adequately extending trials.) This no doubt seems very odd; of course, one will say, it allows us to go far enough. It is almost defined in such a way as to do exactly that.

But this impression is misleading. Consider the following statement by Freedman of the formal conditions under which a trial would be ethical.

[A]t the start of the trial, there must be a state of clinical equipoise [read “community equipoise”] regarding the merits of the regimens to be tested, and the trial must be designed in such a way as to make it reasonable to expect that, if successfully concluded, clinical equipoise [read “community equipoise”] will be disturbed. (Freedman, 1987)

London puts this as the requirement that a trial must “begin in and be designed to disturb a state of equipoise” (London, 2007).

Something about the simplicity and symmetry of this recommendation can appear to lend it a certain amount of plausibility. The point of doing the trial is that we at present have disagreement or uncertainty (or, in any case, we don't have agreement that a certain arm is better). (This is also why it is morally acceptable, from the point of view of the subjects, to do the trial.) So surely the goal (and thus all we have to accomplish) is to create that agreement. Thus when the disturbance of community equipoise triggers the situation where it is now morally problematic to continue, it will also signal the attainment of the goal of the trial. This convergence seems especially clear when one is emphasizing the “clinical practice” point of view rather than the scientific knowledge point of view. The goal is to change people's minds and thus change clinical practice. If we aren't going to do that, it doesn't matter that we technically add to scientific knowledge.

This focus on changing clinical practice can seem refreshingly down to earth. There is a suggestion that RCTs have gotten caught up in fastidiousness that makes them only of “theoretical interest,” when they should have been targeted more clinically. And at the same time, in that context, the CE proposal has a certain elegance — the pieces fit together like lock and key, making it seem as though one is seeing a puzzle being solved. There appears to be the fortunate discovery that

  1. the patient-centered considerations about the expected benefit of the patient, and hence what's ethically acceptable, and

  2. the research-centered considerations concerning what's necessary for a scientifically validated answer

are perfectly linked up. Of course, it is also very convenient — it allows us to do the trials that we were hoping we'd be allowed to do.

Perhaps starting with the assumption that our then-current practice (in 1987) with respect to RCTs was appropriate, and that we just needed to locate the “solution,” clinician-researchers were struck by how Freedman's proposal fit the bill. In any case, something about this clicked with many and it came to be accepted uncritically.

But in fact there is no such perfect fit.

To put the argument briefly: The “community equipoise” criterion says that the evidence is to be “taken seriously” just when all in the community have been convinced. That is, CE is disturbed when the last “judger” has just barely enough evidence to say, “Ok, I'm willing (now, finally) to choose A over B for a given patient where I have to make the choice now.” Starting at this point, it would no longer be ethically acceptable to continue randomization.

Now suppose this particular judge is asked whether we should (whether we are confident enough to) stop the trial, publish the results, and try to get the drug approved. Surely it would be irrational to (immediately, on the same evidence) make this much more momentous decision — where the consequences of acting while being wrong are so dramatically different. And surely many and plausibly most of the other judgers are also still uncertain about whether we have enough evidence to stop the trial given the goals of the trial. Indeed, perhaps all of them are! It depends on the degree of variance in their beliefs. So what reason do we think we have for saying that community equipoise is a criterion that allows a trial to go long enough for us to obtain adequate evidence of the safety and efficacy of our medical treatments? None, I submit.

Indeed, notice the following implication of relying on this “sociological” criterion: Consider a situation where the clinical community as a whole is incorrectly biased in a given direction. A little bit of evidence in that direction might tip them out of community equipoise much, much too soon, depriving us of the check we get from requiring that we have evidence at our predetermined level of statistical significance.

A different thought experiment is this: Consider a situation where all in the clinical community in fact agree on the background facts, methodological rules and values that determine one's equipoise point — the significance of studies already completed or underway, the importance of various side-effects, etc. There is no “spread” in their views about whether to be indifferent to the two arms of the study.

And suppose that they are all at the equipoise (or indifference) point, so each of them is in individual equipoise, and the community is in community equipoise. Here an arbitrarily small amount of evidence in favor of treatment A at the beginning of the trial would tip each of them out of equipoise, and the CE criterion would imply that we have collected all the information we needed, for instance, for approving the drug. But this is surely wrong, and would put in jeopardy our attempts to have secure knowledge with respect to standards of safety and efficacy. The lesson is that the individual patient decision is different from the policy decision, and we need to get evidence that really is reliable, not just convincing to everyone.

It will be objected that this (complete agreement on the background factors) is an implausibly extreme case. But this is just an idealized example to make clear the point — complete agreement would make community equipoise evaporate immediately. In more plausible cases of the sort that would surely arise, there could be a good deal of such agreement about these background factors, and CE would evaporate, not immediately, but much too early. That's still very significant, and it forces the point that a rational clinician really would make a distinction between the amount of evidence needed to tip his or her individual decision and the amount needed to tip the policy decision.

Note that these thought experiments are different from a possible situation where the judgments of experts tend to be skewed towards one arm of the trial based on background knowledge (previous trials with this or similar drugs, theoretical considerations, etc.) which in fact counts as a reason to favor one of the treatments. This would in fact count as a sensible reason for overriding the traditional statistical analysis in terms of p-values — for saying, in effect, that we can stop early for reasons that don't translate into the statistical significance language. From a Bayesian point of view, this would be reflected in the “prior probabilities,” and the ability to account for such background knowledge is often put forth as an argument for Bayesianism. But this is not what is going on in the two cases just described above; these are simply artifacts showing the CE criterion to give clearly poor advice.
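The Bayesian point can be made concrete with a minimal Beta-binomial sketch. The counts and priors here are invented for illustration: the same interim data warrant more confidence that A is the better arm when background knowledge, encoded as an informative prior, already favors A.

```python
import random

random.seed(0)

def prob_a_better(a_succ, a_n, b_succ, b_n,
                  prior_a=(1, 1), prior_b=(1, 1), draws=20000):
    """Monte Carlo estimate of P(success rate of A > success rate of B)
    under independent Beta-binomial models for the two arms."""
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(prior_a[0] + a_succ,
                                    prior_a[1] + a_n - a_succ)
        rate_b = random.betavariate(prior_b[0] + b_succ,
                                    prior_b[1] + b_n - b_succ)
        wins += rate_a > rate_b
    return wins / draws

interim = dict(a_succ=12, a_n=20, b_succ=10, b_n=20)

# Flat Beta(1,1) priors: no background knowledge at all.
flat = prob_a_better(**interim)

# Assumed informative prior: earlier trials and theory already favor A.
informed = prob_a_better(**interim, prior_a=(8, 2), prior_b=(2, 2))

# The informative prior raises the posterior confidence in A above what
# the interim data alone would support.
print(informed > flat)   # True
```

This is the legitimate case distinguished in the text: when the prior skew tracks genuine evidence, earlier stopping can be defensible, whereas the community-equipoise artifacts in the two thought experiments involve no such evidential backing.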

So the move from individual to community equipoise does not solve the problem of the gap between the present patient and policy decisions; rather, it covers it up. We can tie this to a point made before about goals: Recall that it was said that the point of doing the trial is that we now have disagreement, and, hence, surely the goal (in the sense of all we have to accomplish) is to create agreement. And this was tied to the idea that the goal is to (bring about consensus so that you can) change clinical practice. Well, it should be clear that this is too simplistic. In fact the goal is to get safe and effective practice based on reliable information. It's not just to get consensus (so as to change practice), but to get consensus about the right answer, so as to change practice for the better, to have safe and effective treatments. (Gifford, 1995, pp. 146–147)

Let me look at the issue in another way: It's true that we wouldn't do the trial if we weren't in equipoise, and hence there can be a tendency to think that once we are out of equipoise, we don't need to continue the trial. But the central reason that we wouldn't begin the trial if we weren't in equipoise was the moral one concerning the treatment of subjects. It is a fallacy to infer from this moral rationale that, once we are out of equipoise, we have reached the goal of having attained enough scientific information.

A related point worth mentioning is that CE gets some unwarranted added rhetorical force from its appearing to be simply an established principle of good scientific methodology (to avoid fastidiousness; to be sure to set things up so you'll get an answer). This blurs the distinction between "responsible science" in the sense of that which follows good methodology and "responsible science" in the sense of due attention to respect for and protection of research subjects. It also may tend to imbue the latter with some of the determinateness of the former.

So, a number of factors blind us to it, but in essentially all cases CE "runs out" long before we get the information we need. Hence to follow the CE criterion in a literal fashion is to stop without gaining the security of clinical knowledge that such a trial can give us, which was the point of doing the trial. To act as if the CE criterion justifies (something like) our present practice is to fool ourselves. (To be consistent, CE-theorists should be recommending that we stop trials considerably earlier, but this would have a detrimental effect on the security of the resultant knowledge.)

There is another way to look at this: If we in fact continue the trial to statistical significance, or to some legitimate point in terms of gaining the appropriate knowledge, and we tell prospective subjects towards the latter part of the trial that the clinical community is in equipoise, then what we are telling them is false. The move from individual to community equipoise does not solve the problem of the gap between the present patient and policy decisions; it covers it up.

VII. THE IRRELEVANCE OF EQUIPOISE

Miller and Brody (2002, 2003) and Veatch (2007) each argue that equipoise is simply morally irrelevant. Miller and Brody argue on the basis of various considerations that we should not conceptualize the ethics of research in a way so similar to the ethics of clinical practice, and that what is, on reflection, morally important is not that we treat subjects as well as patients but that we don't exploit them. Hence, we must reconceptualize the framework that we use, and it has been a distraction to focus so much attention on equipoise. Veatch argues that what is relevant is a potential subject's willingness to participate in a trial after having been given adequate information about it, provided there is an absence of undue coercion, manipulation or exploitation. It certainly doesn't matter whether or not some clinician is in equipoise, and it doesn't really even matter if the subject is in equipoise, as he or she may be deciding to participate for altruistic reasons. I am sympathetic to each of these lines of thought. Surely we are all in agreement that something has gone wrong in the widespread acceptance of so-called "clinical equipoise." And clearly my view that the criterion does not work suggests that we must indeed look for something else; the non-exploitation approach is certainly a possibility. But I still think that the line of thought concerning equipoise and community equipoise needs to be explored.

It is quite clear that (community) equipoise cannot be a sufficient condition for a trial's legitimacy: also required are informed consent, fair subject selection, the right to withdraw, etc. (Emanuel, Wendler, & Grady, 2000). And it also seems clear that there can be situations where a prospective subject should be able to consent to a trial that is, at least in medical terms, a "bad deal" trial (Jansen, 2005), and hence it should be acceptable to volunteer for a trial that is out of equipoise (of whatever sort). (This could be said to follow from the demise of the community equipoise criterion outlined here, but it should be clear even without this.)

But it is difficult to be sanguine about a line of thought focused only on informed consent, given that the therapeutic misconception is such a strong force (Appelbaum, 1987; Dresser, 2002). If we knew how to resolve the therapeutic misconception problem, it would be a different world. As it is, one is motivated to search for some other ways of providing guidance. Having some version of the equipoise criterion play this role isn't a completely unreasonable idea to pursue; it just doesn't work.

I take seriously that something like the non-exploitation framework might turn out to be a preferable framework, though it is a bit hard to tell until it is worked out more fully. But I still think that we need to have the discussion about equipoise. In particular, since it is Freedman's move to the community perspective that is at the heart of the popularity of the view, we really need to critique that particular move more carefully.

Further, I think it needs to be taken seriously that if the advocates of Freedman's position were right about the so-called "clinical equipoise" criterion (for example, if the moral rationale in relation to obligations to present subjects made the shift from individual to community equipoise ethically acceptable, if following this criterion really led to one being able to reach some semblance of statistical significance (or some respectable policy-decision level of confidence), and if modifications could be made in the equipoise position to deal with certain problems), then the case for rejecting the framework entirely and moving to a non-exploitation conception would not be nearly as strong. This is especially so while the "non-exploitation" framework remains rather vague. Unless the critique is secure, the claim that equipoise is irrelevant will itself be insecure.

Finally, I think that, even if we move to, for example, the non-exploitation framework, it will remain important to think about these issues concerning equipoise — and concerning reliance on community judgment. These matters would likely crop up again as part of the reasoning behind whether the risk is small enough or whether exploitation has taken place. (An example of this comes up in Jansen's article (2005), which suggests that the best way to ensure non-exploitation might well end up being to require equipoise.)

Notes

1. The term "indifferent" that Veatch (2002) uses is in fact better than "uncertain" in a couple of important ways: it better fits the idea that the assessments require value judgments as well, and it doesn't so easily suggest a continuum, as in the example where one retains some uncertainty even once statistical significance has been reached. Unfortunately, its use in a phrase like "the indifference of the clinical community" might leave an unintended impression.

REFERENCES

  • Appelbaum, P., Roth, L. and Lidz, C. 1987. False hopes and best data: Consent to research and the therapeutic misconception. Hastings Center Report, 17: 20–24.
  • Dresser, R. 2002. The ubiquity and utility of the therapeutic misconception. Social Philosophy and Policy, 19: 271–294.
  • Emanuel, E., Wendler, D. and Grady, C. 2000. What makes clinical research ethical? JAMA, 283: 2701–2711.
  • Freedman, B. 1987. Equipoise and the ethics of clinical research. New England Journal of Medicine, 317: 141–145.
  • Gifford, F. 1986. The conflict between randomized clinical trials and the therapeutic obligation. Journal of Medicine and Philosophy, 11: 347–366.
  • Gifford, F. 1995. Community equipoise and the ethics of randomized clinical trials. Bioethics, 9: 127–148.
  • Gifford, F. 2000. Freedman's "clinical equipoise" and "sliding-scale all-dimensions considered" equipoise. Journal of Medicine and Philosophy, 25: 399–426.
  • Gifford, F. 2007. Taking equipoise seriously: The failure of clinical or community equipoise to resolve the ethical dilemmas in randomized clinical trials. In Establishing Medical Reality: Essays in the Metaphysics and Epistemology of Biomedical Science, edited by H. Kincaid and J. McKitrick. New York: Springer.
  • Jansen, L. 2005. A closer look at the bad deal trial: Beyond clinical equipoise. Hastings Center Report, 35(5): 29–36.
  • London, A. Forthcoming. Clinical equipoise: Foundational requirement or fundamental error? In The Oxford Handbook of Bioethics, edited by B. Steinbock. New York: Oxford University Press.
  • Miller, F. and Brody, H. 2002. What makes placebo-controlled trials unethical? American Journal of Bioethics, 2(2): 3–9.
  • Miller, F. and Brody, H. 2003. A critique of clinical equipoise: Therapeutic misconception in the ethics of clinical trials. Hastings Center Report, 33(3): 19–28.
  • Miller, P. and Weijer, C. 2003. Rehabilitating equipoise. Kennedy Institute of Ethics Journal, 13: 93–118.
  • Veatch, R. 2002. Indifference of subjects: An alternative to equipoise in randomized clinical trials. Social Philosophy and Policy, 19(2): 295–323.
  • Veatch, R. 2007. The irrelevance of equipoise. Journal of Medicine and Philosophy, 32(2): 167–183.