ABSTRACT
Ainsworth et al.’s paper “Sources of Bias in Outcome Assessment in Randomised Controlled Trials: A Case Study” examines alternative accounts for a large difference in effect size between two outcomes in the same intervention evaluation. It argues that the probable explanation relates to masking: Only one outcome measure was administered by those aware of participants’ treatment assignment. This paper shows that this conclusion is not substantiated by the evidence: The original paper fails to exclude alternative explanations, and what it takes as positive evidence for its preferred explanation is in fact negative. While accepting the importance of masking in randomised controlled trials, this paper concludes that the original question was based on a misconception about effect sizes: Once effect size is seen correctly as a measure of the whole study design, the question of a difference in effect size between different outcome measures does not need asking.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
Adrian Simpson is Professor of Mathematics Education at Durham University, where he is also Principal of Josephine Butler College. His research interests include the school–university transition (particularly in mathematics), proof and reasoning in mathematics, assessment in higher education, and the nature of evidence in educational research and policy making.
ORCID
Adrian Simpson http://orcid.org/0000-0002-3796-5506
Notes
1 Following Howick, this paper uses the less loaded term “masking” instead of the more common “blinding”.