Abstract
When an evaluation is needed, there is often a tendency to jump into data collection without first considering what may already be available. This paper demonstrates how an existing student database from a large school district could be used to provide rigorous evidence regarding the impact of a supplementary teacher professional development programme on student achievement. Randomization was not possible in this study, so to ensure rigour, the treatment and comparison groups were matched on variables considered highly predictive of future achievement. The two groups were matched at the class level on course type, grade level and prior achievement. Next, hierarchical linear modelling was used to further control for selection bias between the two groups. According to the statistical theory on selection bias, this two‐part process of matching and modelling removes all bias attributable to the observed variables. This illustration shows that a rigorous quasi‐experimental study is possible in education even when circumstances preclude a randomized experiment.
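The two-part design described above, exact matching of classes on observed predictors followed by a hierarchical linear model with students nested in classes, can be sketched as follows. This is an illustrative sketch only, using synthetic data and hypothetical column names (`class_id`, `treated`, `course_type`, `grade_level`, `prior_mean`), not the paper's actual data or analysis code; it assumes the `pandas` and `statsmodels` libraries.

```python
# Illustrative sketch of the paper's two-part strategy (synthetic data,
# hypothetical column names): (1) exact matching of treatment and
# comparison classes on course type, grade level and binned prior
# achievement; (2) a hierarchical linear model (random intercept for
# class) that further adjusts for prior achievement.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic class-level roster (stand-in for a district database).
n_classes = 60
classes = pd.DataFrame({
    "class_id": range(n_classes),
    "treated": rng.integers(0, 2, n_classes),
    "course_type": rng.choice(["algebra", "biology"], n_classes),
    "grade_level": rng.choice([9, 10], n_classes),
    "prior_mean": rng.normal(50, 10, n_classes),
})
# Bin prior achievement so classes can be matched exactly within strata.
classes["prior_bin"] = pd.qcut(classes["prior_mean"], 4, labels=False)

# Keep only strata (course x grade x prior bin) containing at least one
# treated and one comparison class -- a simple exact-matching rule.
strata = ["course_type", "grade_level", "prior_bin"]
ok = (classes.groupby(strata)["treated"]
             .transform(lambda t: t.nunique() == 2))
matched = classes[ok].copy()

# Simulate 20 student outcomes per matched class with a known treatment
# effect of 2 points, then fit the hierarchical linear model.
students = matched.loc[matched.index.repeat(20)].copy()
students["score"] = (students["prior_mean"]
                     + 2.0 * students["treated"]
                     + rng.normal(0, 5, len(students)))
fit = smf.mixedlm("score ~ treated + prior_mean",
                  data=students, groups=students["class_id"]).fit()
print(fit.params["treated"])  # estimated treatment effect
```

Because treatment varies at the class level, the random intercept for `class_id` is what keeps the standard error honest; an ordinary regression on pooled students would understate it.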
Acknowledgements
The evaluation described in this paper was conducted under contract with The College Board. Statements made in this paper do not reflect the position of The College Board. The author would like to thank Mike Garet and Kerstin Carlson Lefloch from the American Institutes for Research for their valuable guidance and support in conducting the study and writing the manuscript. The author would also like to thank two anonymous reviewers from The College Board who gave valuable feedback on previous versions of the report and manuscript.
Notes
1. Barnow, Cain, and Goldberger (1980) defined ‘bias’ as the potential misestimate of the effect of the treatment on the outcome. Selectivity bias becomes an issue whenever assignment to treatment and comparison groups is not random but is dependent on observable explanatory variables, assuming all explanatory variables are observable.