
Between-School Variation in Students' Achievement, Motivation, Affect, and Learning Strategies: Results from 81 Countries for Planning Group-Randomized Trials in Education


References

  • Bloom, H. S. (1995). Minimum detectable effects: A simple way to report the statistical power of experimental designs. Evaluation Review, 19(5), 547–556.
  • Bloom, H. S. (2006). The core analytics of randomized experiments for social research (MDRC Working Papers on Research Methodology). Retrieved from http://www.mdrc.org/sites/default/files/full_533.pdf
  • Bloom, H. S., Hill, C. J., Black, A. R., & Lipsey, M. W. (2008). Performance trajectories and performance gaps as achievement effect-size benchmarks for educational interventions. Journal of Research on Educational Effectiveness, 1(4), 289–328. doi:10.1080/19345740802400072
  • Bloom, H. S., Richburg-Hayes, L., & Black, A. R. (2005). Using covariates to improve precision: Empirical guidance for studies that randomize schools to measure the impacts of educational interventions. Retrieved from http://www.mdrc.org/sites/default/files/full_598.pdf
  • Bloom, H. S., Richburg-Hayes, L., & Black, A. R. (2007). Using covariates to improve precision for studies that randomize schools to evaluate educational interventions. Educational Evaluation and Policy Analysis, 29(1), 30–59. doi:10.3102/0162373707299550
  • Bloom, H. S., Zhu, P., Jacob, R., Raudenbush, S., Martinez, A., & Lin, F. (2008). Empirical issues in the design of group-randomized studies to measure the effects of interventions for children (MDRC Working Papers on Research Methodology). Retrieved from http://eric.ed.gov/?id=ED502531
  • Boekaerts, M. (1996). Self-regulated learning at the junction of cognition and motivation. European Psychologist, 1(2), 100–112.
  • Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., & Pierce, C. A. (2015). Correlational effect size benchmarks. Journal of Applied Psychology, 100(2), 431–449. doi:10.1037/a0038047
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
  • Cook, T. D., Murphy, R. F., & Hunt, H. D. (2000). Comer's school development program in Chicago: A theory-based evaluation. American Educational Research Journal, 37(2), 535–597. doi:10.3102/00028312037002535
  • Dong, N., & Maynard, R. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and minimum required sample sizes for experimental and quasi-experimental design studies. Journal of Research on Educational Effectiveness, 6(1), 24–67. doi:10.1080/19345747.2012.673143
  • Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students' social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405–432. doi:10.1111/j.1467-8624.2010.01564.x
  • Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53, 109–132.
  • Gaspard, H., Dicke, A.-L., Flunger, B., Brisson, B. M., Häfner, I., Nagengast, B., & Trautwein, U. (2015). Fostering adolescents' value beliefs for mathematics with a relevance intervention in the classroom. Developmental Psychology, 51(9), 1226–1240. doi:10.1037/dev0000028
  • Gersten, R., Rolfhus, E., Clarke, B., Decker, L. E., Wilkins, C., & Dimino, J. (2015). Intervention for first graders with limited number knowledge: Large-scale replication of a randomized controlled trial. American Educational Research Journal, 52(3), 516–546. doi:10.3102/0002831214565787
  • Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66(2), 99–136. doi:10.2307/1170605
  • Hedberg, E. C. (2016). Academic and behavioral design parameters for cluster randomized trials in kindergarten: An analysis of the Early Childhood Longitudinal Study 2011 kindergarten cohort (ECLS-K 2011). Evaluation Review, 40(4), 279–313. doi:10.1177/0193841X16655657
  • Hedberg, E. C., & Hedges, L. V. (2014). Reference values of within-district intraclass correlations of academic achievement by district characteristics: Results from a meta-analysis of district-specific values. Evaluation Review, 38(6), 546–582. doi:10.1177/0193841X14554212
  • Hedges, L. V., & Hedberg, E. C. (2007). Intraclass correlation values for planning group-randomized trials in education. Educational Evaluation and Policy Analysis, 29(1), 60–87. doi:10.3102/0162373707299706
  • Hedges, L. V., & Hedberg, E. C. (2013). Intraclass correlations and covariate outcome correlations for planning two- and three-level cluster-randomized experiments in education. Evaluation Review, 37(6), 445–489. doi:10.1177/0193841X14529126
  • Hemphill, J. F. (2003). Interpreting the magnitudes of correlation coefficients. American Psychologist, 58(1), 78–80.
  • Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177.
  • Institute of Education Sciences, & National Science Foundation. (2013). Common guidelines for education research and development. Retrieved from http://ies.ed.gov/pdf/CommonGuidelines.pdf
  • Ivers, N. M., Taljaard, M., Dixon, S., Bennett, C., McRae, A., Taleban, J., … Donner, A. (2011). Impact of CONSORT extension for cluster randomised trials on quality of reporting and study methodology: Review of random sample of 300 trials, 2000–8. BMJ, 343, d5886. doi:10.1136/bmj.d5886
  • Jacob, R. T., Zhu, P., & Bloom, H. S. (2010). New empirical evidence for the design of group randomized trials in education. Journal of Research on Educational Effectiveness, 3(2), 157–198. doi:10.1080/19345741003592428
  • Kelcey, B., Shen, Z., & Spybrook, J. (2016). Intraclass correlation coefficients for designing cluster-randomized trials in sub-Saharan Africa education. Evaluation Review, 40(6), 500–525. doi:10.1177/0193841X16660246
  • Klieme, E. (2016). TIMSS 2015 and PISA 2015: How are they related on the country level? Retrieved from http://www.dipf.de/de/publikationen/pdf-publikationen/Klieme_TIMSS2015andPISA2015.pdf
  • Konstantopoulos, S. (2008). The power of the test for treatment effects in three-level cluster randomized designs. Journal of Research on Educational Effectiveness, 1(1), 66–88. doi:10.1080/19345740701692522
  • Lazowski, R. A., & Hulleman, C. S. (2016). Motivation interventions in education: A meta-analytic review. Review of Educational Research, 86(2), 602–640. doi:10.3102/0034654315617832
  • LeBreton, J. M., & Senter, J. L. (2008). Answers to 20 questions about interrater reliability and interrater agreement. Organizational Research Methods, 11(4), 815–852. doi:10.1177/1094428106296642
  • Lipsey, M. W., & Cordray, D. S. (2000). Evaluation methods for social intervention. Annual Review of Psychology, 51, 345–375.
  • Lipsey, M. W., Puzio, K., Yun, C., Hebert, M. A., Steinka-Fry, K., Cole, M. W., … Busick, M. D. (2012). Translating the statistical representation of the effects of education interventions into more readily interpretable forms. National Center for Special Education Research. Retrieved from http://eric.ed.gov/?id=ED537446
  • Martin, A. J., Bobis, J., Anderson, J., Way, J., & Vellar, R. (2011). Patterns of multilevel variance in psycho-educational phenomena: Comparing motivation, engagement, climate, teaching, and achievement factors. Zeitschrift für Pädagogische Psychologie, 25(1), 49–61. doi:10.1024/1010-0652/a000029
  • Murray, D. M., & Blitstein, J. L. (2003). Methods to reduce the impact of intraclass correlation in group-randomized trials. Evaluation Review, 27(1), 79–103. doi:10.1177/0193841X02239019
  • Murray, D. M., Varnell, S. P., & Blitstein, J. L. (2004). Design and analysis of group-randomized trials: A review of recent methodological developments. American Journal of Public Health, 94(3), 423–432. doi:10.2105/AJPH.94.3.423
  • Muthén, L. K., & Muthén, B. O. (2017). Mplus user's guide (8th ed.). Los Angeles, CA: Muthén & Muthén.
  • Olejnik, S., & Algina, J. (2000). Measures of effect size for comparative studies: Applications, interpretations, and limitations. Contemporary Educational Psychology, 25(3), 241–286. doi:10.1006/ceps.2000.1040
  • Organisation for Economic Co-operation and Development (OECD). (2004). Learning for tomorrow's world: First results from PISA 2003. Paris, France: Author.
  • Organisation for Economic Co-operation and Development (OECD). (2007). Evidence in education. Linking research and policy (2nd ed.). Paris, France: Author.
  • Organisation for Economic Co-operation and Development (OECD). (2009). PISA 2006. Technical report. Paris, France: Author.
  • Organisation for Economic Co-operation and Development (OECD). (2013). PISA 2012 results: What makes schools successful? Resources, policies and practices (Vol. IV). Paris, France: Author.
  • Organisation for Economic Co-operation and Development (OECD). (2014a). PISA 2012 results. Ready to learn. Students' engagement, drive, and self-beliefs (Vol. III). Paris, France: Author.
  • Organisation for Economic Co-operation and Development (OECD). (2014b). PISA 2012 results. What students know and can do. Student performance in mathematics, reading, and science (Vol. I). Paris, France: Author.
  • Organisation for Economic Co-operation and Development (OECD). (2014c). PISA 2012. Technical report. Paris, France: Author.
  • Raudenbush, S. W., Martinez, A., & Spybrook, J. (2007). Strategies for improving precision in group-randomized experiments. Educational Evaluation and Policy Analysis, 29(1), 5–29. doi:10.3102/0162373707299460
  • R Core Team. (2017). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/
  • Schochet, P. Z. (2008). Statistical power for random assignment evaluations of education programs. Journal of Educational and Behavioral Statistics, 33(1), 62–87. doi:10.3102/1076998607302714
  • Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15–21. doi:10.3102/0013189X031007015
  • Spybrook, J., Bloom, H. S., Congdon, R., Hill, C., Liu, X., Martinez, A., & Raudenbush, S. W. (2013). Optimal Design Plus Version 3.0 [Computer Software]. Retrieved from http://hlmsoft.net/od/
  • Spybrook, J., & Raudenbush, S. W. (2009). An examination of the precision and technical accuracy of the first wave of group-randomized trials funded by the Institute of Education Sciences. Educational Evaluation and Policy Analysis, 31(3), 298–318. doi:10.3102/0162373709339524
  • Spybrook, J., Shi, R., & Kelcey, B. (2016). Progress in the past decade: An examination of the precision of cluster randomized trials funded by the U.S. Institute of Education Sciences. International Journal of Research & Method in Education, 39(3), 255–267. doi:10.1080/1743727X.2016.1150454
  • Spybrook, J., Westine, C. D., & Taylor, J. A. (2016). Design parameters for impact research in science education. AERA Open, 2(1), 1–15. doi:10.1177/2332858415625975
  • Wang, M. C., Haertel, G. D., & Walberg, H. J. (1993). Toward a knowledge base for school learning. Review of Educational Research, 63(3), 249–294.
  • West, M. R., Kraft, M. A., Finn, A. S., Martin, R. E., Duckworth, A. L., Gabrieli, C. F. O., & Gabrieli, J. D. E. (2015). Promise and paradox: Measuring students' non-cognitive skills and the impact of schooling. Educational Evaluation and Policy Analysis, 38(1), 148–170. doi:10.3102/0162373715597298
  • Westine, C. D., Spybrook, J., & Taylor, J. A. (2013). An empirical investigation of variance design parameters for planning cluster-randomized trials of science achievement. Evaluation Review, 37(6), 490–519. doi:10.1177/0193841X14531584
  • Wickham, H. (2009). ggplot2: Elegant graphics for data analysis. New York, NY: Springer.
  • Wu, M. (2010). Comparing the similarities and differences of PISA 2003 and TIMSS (OECD Education Working Papers No. 32). Paris, France: OECD Publishing.
  • Zhu, P., Jacob, R., Bloom, H., & Xu, Z. (2012). Designing and analyzing studies that randomize schools to estimate intervention effects on student academic outcomes without classroom-level information. Educational Evaluation and Policy Analysis, 34(1), 45–68. doi:10.3102/0162373711423786
  • Zopluoglu, C. (2012). A cross-national comparison of intra-class correlation coefficient in educational achievement outcomes. Journal of Measurement and Evaluation in Education and Psychology, 3(1), 242–278.