The key role of representativeness in evidence-based education
