ABSTRACT
Social science researchers designing causal impact studies focus on treatment conditions and participant behaviour while neglecting in-depth examination of control conditions, assuming a dichotomy even when interventions and their comparisons are better conceived as latent variables distributed along a range of program elements. When the distributions of treatment and control program elements overlap, a weak treatment-control contrast can arise even when participant behaviour is ideal and the treatment conditions are well understood. Failure to recognize and describe potential overlap between program elements in treatment and control conditions increases the probability of a null finding and hinders research consumers attempting to make evidence-based policy decisions. Establishing treatment differentiation therefore requires understanding the control experience along the same dimensions used to establish treatment fidelity.
Acknowledgments
The authors would like to thank conference participants at the Southern Economic Association and the Society for Research on Educational Effectiveness.
Disclosure Statement
The authors declare that they have no financial interests relevant to the research described in this paper.