ABSTRACT
What role does scientific evidence play in educational practice? Supporters of evidence-based education (EBE) see it as a powerful way of improving the quality of public services, one readily applicable to the education sector. Academic scholarship, however, points out important limits to this applicability. I offer an account inspired by Tullock’s theory of bureaucracy that helps explain EBE’s influence despite these limits. Recent configurations of EBE are an imperfect solution to two imperatives where policymakers are at an informational disadvantage: (a) guiding professionals working in the field and (b) evaluating evidence from academic researchers. EBE, especially in the form of randomised controlled trials (RCTs) and systematic reviews, offers a way of filtering a complex range of research to produce a determinate result that is transparent to policymakers. However, this impression of research transparency is misleading, as it omits the theoretical background that is critical for successfully interpreting the results of particular interventions. This filtering comes at the cost of relevance to the frontline professionals whom the research evidence is supposed to inform and help.
Acknowledgements
Many thanks are due to David S. Lucas, Nancy Cartwright, and the two helpful anonymous reviewers.
Disclosure statement
No potential conflict of interest was reported by the author.
ORCID
Nick Cowen http://orcid.org/0000-0001-7039-8999
Notes
1. This assumes the experiment is conducted in a way that does not introduce confounding variables, which is itself far from trivial. Unless a trial is triple-blind, placebo-controlled, and based on an intention-to-treat analysis (conditions that are often very hard to build into a social policy intervention), it is almost inevitable that some confounding factors will have been introduced. Deciding that it is safe to ignore such factors itself requires theoretical assumptions. In addition, an unbiased estimate can still turn out to be inaccurate, especially if it relies on a small sample. Under ordinary tests of statistical significance, we expect 1 in 20 unbiased estimates to reject the null hypothesis erroneously, and such errors can survive in the literature under publication bias even if every individual study is unbiased.
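The 1-in-20 figure in note 1 follows directly from the conventional 5% significance threshold. As an illustration only (this simulation is not part of the original paper, and all names and parameters in it are hypothetical), one can simulate many studies of a treatment with no true effect and count how often an unbiased estimate nonetheless reaches significance; under publication bias, these are precisely the studies most likely to enter the literature.

```python
import math
import random
import statistics

def simulate_null_studies(n_studies=10_000, n_per_study=50, seed=0):
    """Simulate studies of a treatment whose true effect is zero.

    Each study draws a sample from a standard normal distribution
    (true mean 0) and tests the sample mean against zero at the
    conventional two-sided 5% level, approximated by |t| > 1.96.
    Returns the fraction of studies that (wrongly) reject the null.
    """
    rng = random.Random(seed)
    significant = 0
    for _ in range(n_studies):
        sample = [rng.gauss(0, 1) for _ in range(n_per_study)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / math.sqrt(n_per_study)
        if abs(mean / se) > 1.96:  # false positive: no real effect exists
            significant += 1
    return significant / n_studies

rate = simulate_null_studies()
print(f"False-positive rate among unbiased null studies: {rate:.3f}")
```

The rate lands near 0.05, as the note predicts: even with every individual study unbiased, roughly 5% reject a true null, and a literature that publishes mainly significant results will over-represent exactly these errors.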