Progress in the past decade: an examination of the precision of cluster randomized trials funded by the U.S. Institute of Education Sciences

Pages 255-267 | Received 27 Jun 2015, Accepted 14 Dec 2015, Published online: 18 Feb 2016

References

  • Bloom, H. S. 1995. “Minimum Detectable Effects: A Simple Way to Report the Statistical Power of Experimental Designs.” Evaluation Review 19 (5): 547–556. doi: 10.1177/0193841X9501900504
  • Bloom, H. S. 2005. “Randomizing Groups to Evaluate Place-Based Programs.” In Learning More from Social Experiments: Evolving Analytic Approaches, edited by H. S. Bloom, 115–172. New York: Russell Sage Foundation.
  • Bloom, H. S., L. Richburg-Hayes, and A. R. Black. 2007. “Using Covariates to Improve Precision: Empirical Guidance for Studies that Randomize Schools to Measure the Impacts of Educational Interventions.” Educational Evaluation and Policy Analysis 29 (1): 30–59. doi: 10.3102/0162373707299550
  • Boruch, R. F., and E. Foley. 2000. “The Honestly Experimental Society.” In Validity and Social Experimentation: Donald Campbell's Legacy, edited by L. Bickman, 193–239. Thousand Oaks, CA: Sage.
  • Brandon, P. R., G. M. Harrison, and B. E. Lawton. 2013. “SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments.” American Journal of Evaluation 34 (1): 85–90. doi: 10.1177/1098214012466453
  • Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum.
  • Cook, T. D. 2005. “Emergent Principles for the Design, Implementation, and Analysis of Cluster-Based Experiments in Social Science.” The Annals of the American Academy of Political and Social Science 599: 176–198. doi: 10.1177/0002716205275738
  • Donner, A., and N. Klar. 2000. Design and Analysis of Cluster Randomization Trials in Health Research. London: Arnold.
  • Hedges, L. V., and E. C. Hedberg. 2007. “Intraclass Correlation Values for Planning Group-Randomized Trials in Education.” Educational Evaluation and Policy Analysis 29 (1): 60–87. doi: 10.3102/0162373707299706
  • Hedges, L. V., and E. C. Hedberg. 2013. “Intraclass Correlations and Covariate Outcome Correlations for Planning Two- and Three-Level Cluster-Randomized Experiments in Education.” Evaluation Review 37 (6): 445–489. doi: 10.1177/0193841X14529126
  • Hedges, L. V., and C. Rhoads. 2009. Statistical Power Analysis in Education Research (NCSER 2010–3006). Washington, DC: National Center for Special Education Research, Institute of Education Sciences, US Department of Education.
  • Hill, C., H. S. Bloom, A. Rebeck-Black, and M. Lipsey. 2008. “Empirical Benchmarks for Interpreting Effect Sizes in Research.” Child Development Perspectives 2 (3): 172–177. doi: 10.1111/j.1750-8606.2008.00061.x
  • Institute of Education Sciences. 2015. Request for Applications. Accessed June 10, 2015. http://ies.ed.gov/funding/pdf/2016_84305A.pdf.
  • Jacob, R., P. Zhu, and H. S. Bloom. 2010. “New Empirical Evidence for the Design of Group Randomized Trials in Education.” Journal of Research on Educational Effectiveness 3 (2): 157–198. doi: 10.1080/19345741003592428
  • Liu, X. S. 2014. Statistical Power Analysis for the Social and Behavioral Sciences. New York: Taylor and Francis Group.
  • Murray, D. M. 1998. Design and Analysis of Group Randomized Trials. New York: Oxford University Press.
  • Murray, D. M., and J. L. Blitstein. 2003. “Methods to Reduce the Impact of Intraclass Correlation in Group-Randomized Trials.” Evaluation Review 27 (1): 79–103. doi: 10.1177/0193841X02239019
  • Murray, D. M., and B. Short. 1995. “Intra-class Correlation Among Measures Related to Alcohol Use by Young Adults: Estimates, Correlates, and Applications in Intervention Studies.” Journal of Studies on Alcohol 56 (6): 681–694. doi: 10.15288/jsa.1995.56.681
  • Raudenbush, S. W. 1997. “Statistical Analysis and Optimal Design for Cluster Randomized Trials.” Psychological Methods 2 (2): 173–185. doi: 10.1037/1082-989X.2.2.173
  • Raudenbush, S. W., A. Martinez, and J. Spybrook. 2007. “Strategies for Improving Precision in Group-Randomized Experiments.” Educational Evaluation and Policy Analysis 29 (1): 5–29. doi: 10.3102/0162373707299460
  • Raudenbush, S. W., J. Spybrook, R. Congdon, X. Liu, A. Martinez, H. Bloom, and C. Hill. 2011. Optimal Design Software Plus Empirical Evidence (Version 3.0) [Software]. www.wtgrantfoundation.org.
  • Schochet, P. Z. 2008. “Statistical Power for Random Assignment Evaluations of Education Programs.” Journal of Educational and Behavioral Statistics 33 (1): 62–87. doi: 10.3102/1076998607302714
  • Siddiqui, O., D. Hedeker, B. R. Flay, and F. B. Hu. 1996. “Intraclass Correlation Estimates in a School-Based Smoking Prevention Study: Outcome and Mediating Variables, by Sex and Ethnicity.” American Journal of Epidemiology 144 (4): 425–433. doi: 10.1093/oxfordjournals.aje.a008945
  • Spybrook, J., H. Bloom, R. Congdon, C. Hill, A. Martinez, and S. W. Raudenbush. 2011. Optimal Design Plus Empirical Evidence: Documentation for the “Optimal Design” Software Version 3.0. www.wtgrantfoundation.org.
  • Spybrook, J., A. Puente, and M. Lininger. 2013. “From Planning to Implementation: An Examination of the Changes in the Research Design, Sample Size, and Precision of Group Randomized Trials Launched by the Institute of Education Sciences.” Journal of Research on Educational Effectiveness 6 (4): 396–420. doi: 10.1080/19345747.2013.801544
  • Spybrook, J., and S. W. Raudenbush. 2009. “An Examination of the Precision and Technical Accuracy of the First Wave of Group Randomized Trials Funded by the Institute of Education Sciences.” Educational Evaluation and Policy Analysis 31 (3): 298–318. doi: 10.3102/0162373709339524
  • Ukoumunne, O. C., M. C. Gulliford, S. Chinn, J. A. C. Sterne, and P. G. J. Burney. 1999. “Methods for Evaluating Area-Wide and Organisation-Based Interventions in Health and Health Care: A Systematic Review.” Health Technology Assessment 3 (5): 1–99.
  • Westine, C. D., J. Spybrook, and J. Taylor. 2013. “An Empirical Investigation of Variance Design Parameters for Planning Cluster-Randomized Trials of Science Achievement.” Evaluation Review 37 (6): 490–519. doi: 10.1177/0193841X14531584
  • What Works Clearinghouse. 2014. Procedures and Standards Handbook Version 3.0. Accessed June 10, 2015. http://ies.ed.gov/ncee/wwc/DocumentSum.aspx?sid=19.
