Editorial

Interventions in different disciplines: a comparison of clinical drug trials and economic experiments

Abstract

A neglected area in the study of experimental interventions is the comparison of different disciplines with each other. This editorial performs an interdisciplinary analysis by contrasting interventions in clinical drug trials with interventions in economic experiments. There are not only methodological similarities (the method of difference and randomization) but also differences (blinding). The large number of similar challenges and problems in implementation and analysis (e.g., self-selection, the volunteer subject, and external validity) offers the potential for cross-fertilization of the two disciplines, provided their language barriers can be overcome.

During the mid-20th century, economics was widely considered to be a nonexperimental science [1], whereas a standardized experimental procedure involving human subjects had already been established in medicine. Despite the asymmetric historical development of experimental procedures in the two disciplines, clinical drug trials and economic experiments have research interests that are quite similar to each other. Both are concerned with a reliable assessment of the effectiveness of an intervention. In medicine, the effectiveness of pharmaceutical drugs and the efficacy of new therapies are of great importance. In economics, issues that address the impact of institutions and policies on human decision-making behavior are relevant.

In the literature, many economic experiments have been compared with psychological experiments. There are debates, for example, about whether or not to use monetary incentives [2–4]. In contrast, virtually no attempts have been made to systematically compare the experimental procedures in medicine and economics. Language barriers may prevent an interdisciplinary exchange of ideas. Alternatively, the overall differences between medicine and economics may outweigh the similarity of the research goal.

Against this background, I examine two important questions that explore the potential for an interdisciplinary fertilization of medicine and economics. First, are there parallels in the methodological approach of clinical drug trials and economic experiments? Second, do the two disciplines face similar problems and challenges?

Principles of randomized controlled trials

A primary goal of an experiment is to elicit cause–effect relationships by group comparisons. The basic procedure for evaluating the effectiveness of interventions is to conduct randomized controlled studies. In the following paragraphs, I will briefly discuss three principles that contribute to the experimental identification of causal relations: the method of difference, randomization and blinding. Subsequently, I examine the relevance of each of the principles in clinical drug trials and economic experiments.

The method of difference, proposed by the philosopher Mill [5], is a principle of experimental research that aims at investigating the effects of a given cause. The basic idea is to eliminate common elements and circumstances, in the style of a system of equations, so that only the remaining elements are relevant for the outcome. For this purpose, two cases are necessary that are identical, or at least very similar, with the exception of the one element of interest. In reality, however, two cases are rarely sufficiently similar to each other due to complexity and incomplete information. Since spontaneous experiments generally do not meet the criterion of similarity, experimenters enter the laboratory and actively generate similarity in two cases. In other words, they conduct an artificial experiment. In artificial experiments, two cases are compared with each other in which the object of interest is present in one case but not in the other. The comparison of the two cases allows conclusions to be drawn on the impact of the subject of investigation. The method of difference, therefore, allows for active-comparator control.

The statistician Fisher [6] emphasizes the importance of having recourse to randomization. Randomization means that the experimental subjects are randomly (i.e., with equal probability) assigned to one of the groups in the experiment. The purpose of randomization is to create groups that are comparable, before the study starts, with respect to relevant factors (e.g., age, gender, and culture). A factor is relevant if it can influence the outcome of a study. Some factors are known to the experimenter, while other factors are unknown. Randomly assigning the participants does not make unknown factors known; however, because random assignment distributes them equally across groups in expectation, their influence is neutralized. Active assignment of the individuals, by contrast, would run the risk of conscious or unconscious influences on group assignment by the experimenter (selection bias). This could lead to systematic differences in factors between the groups and, therefore, may bias the outcome of the experiment.
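The balancing effect of random assignment is easy to illustrate. The following minimal Python sketch (the subject pool, the factor "age" and the seed are all hypothetical) shuffles a pool of subjects and splits it into two equal groups, so that known factors, and any unknown factors that travel with the subjects, are balanced in expectation:

```python
import random

def randomize(subjects, seed=None):
    """Randomly assign subjects to two groups of (near-)equal size."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical subject pool with one observable factor (age); any
# unobserved factor attached to a subject is shuffled along with it.
subjects = [{"id": i, "age": 20 + i % 50} for i in range(200)]
control, treatment = randomize(subjects, seed=1)

avg_age = lambda g: sum(s["age"] for s in g) / len(g)
# In expectation the group means of any factor are (approximately) equal.
print(round(avg_age(control), 1), round(avg_age(treatment), 1))
```

The point of the sketch is that the experimenter never touches the assignment of any individual subject, which is exactly what rules out selection bias.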

Blinding (masking), a concept that goes back to the end of the 18th century [7], describes the extent of knowledge about the participant’s group allocation (single blinding, double blinding). It is used when knowledge of group membership may have an unintended influence on the outcome of the study. Single blinding means that the study participants (e.g., patients) are not informed of their group membership. In double blinding, not only the study participants but also the study leaders remain uninformed about the group assignment.

Design & procedure of clinical drug trials & economic experiments

Clinical drug trials

To assess the effectiveness of a new drug, the experimenter has to account for several effects which, if not considered, may misleadingly make an ineffective drug appear effective in a clinical trial. Confounding effects and placebo effects are the most critical ones.

Confounding effects include effects due to the natural course of the disease, statistical effects (regression to the mean), habituation effects and methodological errors [8]. Regardless of the intervention, patients’ symptoms may decrease over time on their own (natural course of the disease). Regression to the mean is a statistical phenomenon whereby extreme values (e.g., of blood pressure) tend to be followed by values closer to the average [9]. Habituation effects (time effects) describe the participants’ increasing familiarity over time with their situation as subjects in a clinical trial. A typical methodological error is to overlook possible interactions with other medications taken in addition to the actual intervention; such interactions therefore have to be considered.
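Regression to the mean can be made concrete with a small simulation (all numbers below are hypothetical): patients are enrolled only if a first noisy blood-pressure reading is extreme, and a second reading taken with no intervention whatsoever is nonetheless lower on average:

```python
import random

rng = random.Random(42)

# Two noisy measurements of the same stable "true" blood pressure per
# patient; nothing happens between the two readings.
true_bp = [rng.gauss(130, 10) for _ in range(5000)]
first = [t + rng.gauss(0, 8) for t in true_bp]
second = [t + rng.gauss(0, 8) for t in true_bp]

# Enroll only patients whose first reading was extreme (>150), as a trial
# with a high-blood-pressure inclusion criterion would.
enrolled = [i for i, m in enumerate(first) if m > 150]
mean_first = sum(first[i] for i in enrolled) / len(enrolled)
mean_second = sum(second[i] for i in enrolled) / len(enrolled)

# The second reading is lower on average although no treatment was given:
# the extreme first readings were partly measurement noise.
print(round(mean_first, 1), round(mean_second, 1))
```

A single-arm before-and-after comparison would misattribute this purely statistical drop to the intervention; a randomized control group absorbs it.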

Placebo effects are of great importance for clinical drug trials [10]. A placebo is a sham drug (i.e., a substance without pharmacological activity, according to the current state of scientific knowledge), which is similar to the active substance in appearance (color, size), flavor, weight and, to a certain extent, side effects.Footnote The patient’s expectations regarding the effectiveness of a drug, which depend on factors including the price of the drug and the doctor-patient relationship, determine the placebo effect. In the case of placebo effects, positive expectations regarding the health outcomes of a drug positively influence the actual course of those outcomes. Placebo effects differ in occurrence and magnitude from patient to patient due to varying expectations. However, much about placebo effects remains uncertain, including their probability of occurrence, magnitude and duration [11]. Although placebo effects are quite welcome in medical treatments, they make the analysis of the effectiveness of interventions more difficult. The major concern in analyzing the effectiveness of a new drug is that confounding effects, placebo effects and the effect of the actual intervention overlap. Therefore, the experimenter cannot assess the effectiveness of a new drug simply by comparing the health of patients before and after the treatment.

In clinical drug trials, generally two groups are compared with each other. In one of the groups, patients receive the placebo preparation (control group), and in the other, patients receive the drug being tested, which contains the pharmacological substance (experimental group). The patients are randomly assigned to one of the two groups. This cannot prevent various potential distortions (e.g., regression to the mean); however, these are divided equally between the groups. The central question is whether the active substance (drug/vaccine) is significantly more effective than the inactive placebo substance. If, however, no significant difference from the placebo is found, this does not mean that an individual patient derives no positive effect from the drug.
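A minimal sketch of such a placebo-controlled two-group comparison, using hypothetical outcome data and a hand-rolled Welch t statistic, might look as follows:

```python
import math
import random

rng = random.Random(7)

# Hypothetical outcomes: symptom-score improvement per patient. The placebo
# group improves a little (placebo plus confounding effects); the drug group
# improves more if the substance itself has an effect.
placebo = [rng.gauss(2.0, 3.0) for _ in range(80)]
drug = [rng.gauss(5.0, 3.0) for _ in range(80)]

def welch_t(a, b):
    """Welch's t statistic for the difference of two group means."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

# A large t indicates that the drug's effect exceeds what placebo and
# confounding effects alone would produce in both groups.
t = welch_t(placebo, drug)
print(round(t, 2))
```

Because placebo and confounding effects act on both groups alike, the between-group difference isolates the pharmacological effect, which is the method of difference in statistical form.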

Clinical drug trials usually apply double blinding. This means that neither the trial-conducting doctor nor the patient knows, throughout the entire course of the study, whether the patient is in the control or the experimental group. Knowledge of the nature of the intervention could distort the results of the study, since physician and patient interact with each other during the clinical drug trial. It may further influence the patient’s fears, expectations and decision to eventually leave the clinical study. Participants in the placebo group could increasingly terminate their participation in the study or switch to conventional treatments, while participants with positive experiences remain. In other words, the control and the experimental group may become systematically different due to dropouts. Knowledge of the group assignment by the study-conducting doctor may also influence the outcome of the study. The assessment of the effectiveness of the intervention would be complicated if the doctor no longer treated patients identically regardless of group assignment (e.g., through an influence on the doctor’s commitment). Negative sentiments or attitudes can also be transmitted to the patient through nonverbal communication and, in turn, influence the patient’s continuance in the study. Furthermore, measurements are partly subjective: if the doctor believes in a treatment, this may affect his interpretation and evaluation (observer effect [12]). The objective of double blinding in clinical drug trials is, therefore, to produce observational equivalence.

Economic experiments

Participants in an economic experiment are heterogeneous. In particular, they differ in their information-processing abilities (bounded rationality, in the tradition of Herbert Simon). Furthermore, through the chosen design of the experiment, the experimenter influences the self-selection of participants and thus the composition of the sample.

Experimental subjects differ in the extent of their bounded rationality. With this term, Simon [13] referred to the cognitive limits of individuals: they typically choose satisfactory courses of action and do not attempt to optimize. If one option is considered satisfactory, the individual may not examine further options. Thus, the decisions of individuals are not necessarily consistent; in particular, the temporal sequence in which the alternatives are presented to the individual matters [14]. Individuals often resort to heuristics (i.e., simple decision rules) to overcome their computational limits [15,16].

The experimenter influences the outcome of an experiment by determining its design (e.g., by choosing a certain level of monetary incentives). The willingness to participate in economic experiments increases, ceteris paribus, with higher monetary incentives. The amount of monetary incentives also influences the decision-making behavior of the individuals. For example, Holt and Laury [17] found, in lottery-choice tasks that assess individual risk attitudes, that risk aversion increases when higher monetary stakes are provided.

Furthermore, the medium in which an experiment is conducted may affect its outcome and, seemingly, the decision-making behavior. While economic experiments are traditionally conducted in a laboratory environment, interest in extra-laboratory experiments has grown in recent years. These experiments are in the spirit of traditional laboratory experiments but differ in some essential characteristics, including, but not restricted to, recruiting nonstudent experimental subjects and moving the location of the experiment from the traditional laboratory to the World Wide Web [18]. Internet-based experiments have their pros and cons. On the one hand, they enable experimental subjects to participate in their familiar environments, independently of space restrictions, and the subjects do not have to meet at one specific time in the laboratory. On the other hand, internal validity is lower than in laboratory experiments, because full control over individuals cannot be ensured: outside the laboratory, it remains unclear whether technical assistance was used or other people influenced the experimental subjects. One might argue that this is a rather serious problem, but this is not necessarily true; the lack of control may simply let participants bring the marginal costs and benefits of their effort into balance. An even greater source of concern is the issue of the volunteer subject [19]: a structural deviation between experimental and nonexperimental subjects may limit the external validity of the study. Because no costs are linked to overcoming space restrictions in Internet-based experiments (e.g., time, a ticket to travel to the laboratory), the opportunity costs of participation are, ceteris paribus, higher in laboratory experiments. For a given overall share of informed individuals, the number of potential experimental subjects may therefore be higher in Internet-based experiments, because more individuals have an expected utility at least as high as their reservation utility. The decision to run a laboratory or an Internet-based experiment affects not only the quantitative but possibly also the qualitative composition of the sample. Internet-based experiments may serve as an instrument to overcome spatial limits and to recruit representatives of the social group of interest, and even experimental subjects from different nations.

To isolate the influence of institutional innovations in economic experiments, two groups are generally compared with each other: one without the policy measure (control group) and one with the policy measure (experimental group). The participants are randomly assigned to one of the groups. This ensures that the experimental subjects are, despite their heterogeneity, distributed homogeneously between the groups. Double blinding is typically not applied in economic experiments: the participants know which policy measure they face, and observing their response to it is precisely the aim of the analysis. Hoffman et al. [20] argue that the goal of economic experiments is not to achieve double blinding but rather to create irrelevance of subject-message identity.

Comparison

The main similarities between clinical drug trials and economic experiments are the method of difference and randomization. The main difference between the two disciplines is blinding: in contrast to clinical drug trials, economic experiments usually do not apply it.

Clinical drug trials allow probability statements (i.e., statistical averages) about the effectiveness of a drug. Because economic experiments have no blinding, the outcomes of an intervention can be clearly assigned to an individual. However, there are also limits in economic experiments. The design of an economic experiment is generally similar to a model: an abstraction of reality, and therefore incomplete. A single experiment cannot capture all of the variables with a potential impact. In addition, the experimenter makes various assumptions in advance (e.g., the amount of monetary incentives, or whether to carry out the experiment in the laboratory or on the Internet), which have to be considered when interpreting the individuals’ behavior. The assumptions made in an economic experiment vary in how closely they match real-life conditions. In contrast to statements about the direction of behavioral responses to an intervention, it often does not make sense to interpret absolute levels of behavior in economic experiments.

Are there similar problems & challenges?

Besides the relevance of self-selection and influence by the experimenter, the duration of clinical drug trials and economic experiments probably affects their outcomes. Hard endpoints are often used in clinical drug trials. However, it remains an open question whether quality of life and side effects can be adequately captured by this procedure. A limited and often relatively short trial duration neglects potential long-term effects. Similar problems arise in economic experiments. The participants in multiperiod experiments may change their behavior over time (e.g., by learning). However, a long trial period can lead participants to lose their motivation. This raises the question of which time period should be used for observing the effectiveness of an intervention.

Randomized controlled trials face a conflict of objectives between internal and external validity [21]. A high degree of control over the experimental environment leads to a loss of external validity due to its artificial nature. In clinical drug trials, the representativeness of the results and the generalizability of the studies to everyday practice are limited. The average trial patient does not match the patient in general practice: very young and very old patients, pregnant women and multimorbid patients are under-represented in clinical trials. In economic experiments, flexibility in the design of an experiment is somewhat greater. Laboratory experiments have high internal but low external validity. Field experiments help to overcome the low external validity by observing individuals in their natural environment. Particularly impressive are the studies by Duflo and Banerjee [22], who experimentally tested strategies of development aid in developing countries with the help of field experiments, studying different groups of households and villages that were randomly confronted with development aid measures. However, several problems arose, such as observation and review over the entire term (resulting in high costs), as well as ethical problems (denial of assistance to part of the population). Similar ethical concerns arise in clinical drug trials, because only a portion of the participants receives the actual drug.

Concluding remarks

Despite their historically asymmetric development, clinical drug trials and economic experiments do have similarities in their research goals (the influence of interventions), design and procedure, as well as in their problems and challenges. This article indicates that there is an interesting and promising research field for interdisciplinary communication. However, this study is only a first step. Future work should address the similar problems and challenges (e.g., the gap between internal and external validity) in more detail so that the two disciplines can learn from each other.

Table 1. Similarities and differences of clinical drug trials and economic experiments.

Results from a single experiment are generally not sufficient to provide robust conclusions. However, the comparison of various experiments (e.g., in the form of metastudies) also poses challenges. In clinical drug trials, a therapy X may show no significant effect with respect to its corresponding placebo group, while an alternative therapy Y shows a statistically significant effect compared with its placebo group. One could argue that intervention Y would be preferable to intervention X. This conclusion, however, is premature. Although the relative success of intervention Y with respect to its placebo group is greater, this says nothing about the absolute success of the interventions. The absolute success of intervention X can be greater than that of therapy Y if the placebo effect is significantly larger with intervention X than with Y. Walach [23] refers to this as the efficacy paradox. Parallels are conceivable in economic experiments: institutional innovations may differ in effectiveness depending on the experimental environment (Internet, laboratory), because systematic differences between experiments (e.g., the motives of the participants) may imply different opportunity costs. Ultimately, it must be assessed whether experiments are actually comparable and how any differences between them can be explained.
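The efficacy paradox can be made concrete with a small numerical sketch (all recovery rates below are hypothetical):

```python
# Hypothetical recovery rates illustrating Walach's efficacy paradox.
# Therapy X: large placebo response, small specific effect.
placebo_X, total_X = 0.60, 0.70  # placebo arm vs. treatment arm
# Therapy Y: small placebo response, clear specific effect.
placebo_Y, total_Y = 0.20, 0.45

# Specific (drug-attributable) effect = treatment arm minus placebo arm.
specific_X = total_X - placebo_X  # ~0.10, likely "not significant"
specific_Y = total_Y - placebo_Y  # 0.25, likely "significant"

# Y wins on the specific effect, yet X wins on absolute success:
print(round(specific_X, 2), round(specific_Y, 2))
print(total_X, total_Y)
```

Comparing only the placebo-adjusted effects across trials would rank Y above X, even though a patient's overall chance of recovery is higher under X.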

Acknowledgements

I would like to thank anonymous reviewers for their comments, ideas and criticism. In addition, I am very grateful for the financial support provided by ScienceCampus Halle (WCH).

Financial & competing interests disclosure

The author has no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

Notes

For ethical reasons, the side-effect profile differs between placebos and the active drug: deliberately provoking side effects with a substance that provides no benefit would be unacceptable. Moreover, inflicting side effects on control patients would also undermine the assessment of a drug’s safety profile.

References

  • Friedman M. The methodology of positive economics. In: Friedman M, editor. Essays in positive economics. University of Chicago Press; Chicago: 1953. p. 3-43
  • Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica 1979;47(2):263-92
  • Smith VL, Walker JM. Monetary rewards and decision cost in experimental economics. Econ Inq 1993;31(2):245-61
  • Gneezy U, Rustichini A. Pay enough or don’t pay at all. Q J Econ 2000;115(3):791-810
  • Mill JS. A system of logic, ratiocinative and inductive. John W. Parker; London: 1843
  • Fisher RA. The design of experiments. Oliver & Boyd; Edinburgh: 1935
  • Franklin B, Bailly JS, Lavoisier A. Rapport des commissaires chargés par le roi, de l’examen du magnétisme animal. Chez Gabriel Floteron; Nice: 1785
  • Kienle GS, Kiene H. The powerful placebo effect. J Clin Epidemiol 1997;50(12):1311-18
  • Morton V, Torgerson DJ. Effect of regression to the mean on decision making in health care. BMJ 2003;326(7398):1083-4
  • Beecher HK. The powerful placebo. JAMA 1955;159(17):1602-6
  • Benedetti F. Placebo effects: understanding the mechanisms in health and disease. Oxford University Press; New York: 2009
  • Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet 2002;359:696-700
  • Simon HA. Invariants of human behavior. Annu Rev Psychol 1990;41:1-19
  • Simon HA. Reason in human affairs. Stanford University Press; Stanford: 1983
  • Goldstein DG, Gigerenzer G. Models of ecological rationality: the recognition heuristic. Psychol Rev 2002;109(1):75-90
  • Selten R, Abbink K, Cox R. Learning direction theory and the winner’s curse. Exp Econ 2005;8:5-20
  • Holt CA, Laury SK. Risk aversion and incentive effects. Am Econ Rev 2002;92(5):1644-55
  • Charness G, Gneezy U, Kuhn MA. Experimental methods: extra-laboratory experiments - extending the reach of experimental economics. J Econ Behav Organ 2013;91:93-100
  • Rosenthal R, Rosnow RL. The volunteer subject. In: Rosenthal R, Rosnow RL, editors. Artifacts in behavioral research. Oxford University Press; Oxford: 2009. p. 48-92
  • Hoffman E, McCabe K, Smith VL. Social distance and other-regarding behavior in dictator games. Am Econ Rev 1996;86(3):653-60
  • Rothwell PM. External validity of randomised controlled trials: “To whom do the results of this trial apply?”. Lancet 2005;365(9453):82-93
  • Duflo E, Banerjee AV. Poor economics: a radical rethinking of the way to fight global poverty. PublicAffairs; Philadelphia: 2011
  • Walach H. Das Wirksamkeitsparadox in der Komplementärmedizin [The efficacy paradox in complementary medicine]. Forschende Komplementärmedizin und Klassische Naturheilkunde 2001;8:193-5
