References
- Al Otaiba, S., Connor, C. M., Kosanovich, M., Schatschneider, C., Dyrlund, A. K., & Lane, H. (2008). Reading First kindergarten classroom instruction and students' phonological awareness and decoding fluency growth. Journal of School Psychology, 46(3), 281–314.
- Angrist, J. D., Imbens, G. W., & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91, 444–472.
- Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.
- Blanc, S., Christman, J. B., Liu, R., Mitchell, C., Travers, E., Bulkley, K. E., & Lawrence, N. R. (2010). Learning to learn from data: Benchmarks and instructional communities. Peabody Journal of Education, 85, 205–225.
- Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability system. American Educational Research Journal, 42(2), 231–268.
- Boruch, R., Weisburd, D., & Berk, R. (2010). Place randomized trials. In A. Piquero, & D. Weisburd (Eds.), Handbook of quantitative criminology (pp. 481–502). New York, NY: Springer.
- Bracey, G. W. (2005). No Child Left Behind: Where does the money go? (EPSL-0506-114-EPRU). Tempe, AZ: Education Policy Studies Laboratory, Arizona State University.
- Bryk, A., Gomez, L., & Grunow, A. (2011). Getting ideas into action: Building networked improvement communities in education. In M. Hallinan (Ed.), Frontiers in sociology of education (pp. 127–162). New York, NY: Springer.
- Carlson, D., Borman, G. D., & Robinson, M. (2011). A multi-state district-level cluster randomized trial of the impact of data-driven reform on reading and mathematics achievement. Educational Evaluation and Policy Analysis, 33, 378–398.
- Clune, W. H., & White, P. A. (2008). Policy effectiveness of interim assessments in Providence public schools (WCER Working Paper No. 2008-10). Madison, WI: University of Wisconsin–Madison, Wisconsin Center for Education Research.
- Connor, C. M., Jakobsons, L. J., Crowe, E., & Meadows, J. (2009). Instruction, differentiation, and student engagement in Reading First classrooms. Elementary School Journal, 109(3), 221–250.
- Connor, C. M., Morrison, F. J., Fishman, B., Giuliani, S., Luck, M., Underwood, P. S., … Schatschneider, C. (2011). Testing the impact of child characteristics × instruction interactions on third graders' reading comprehension by differentiating literacy instruction. Reading Research Quarterly, 46(3), 189–221.
- Connor, C. M., Piasta, S. B., Fishman, B., Glasney, S., Schatschneider, C., Crowe, E., … Morrison, F. J. (2009). Individualizing student instruction precisely: Effects of child × instruction interactions on first graders' literacy development. Child Development, 80(1), 77–100.
- Cordray, D., Pion, G., Brandt, C., & Molefe, A. (2012). The impact of the Measures of Academic Progress (MAP) program on student reading achievement (NCEE 2013-4000). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
- Coyne, M. D. (2015). Effectiveness of a beginning reading intervention: Compared with what? Examining the counterfactual in experimental research. In C. M. Connor, & P. H. McCardle (Eds.), Advances in reading intervention: Research to practice to research (pp. 221–230). Baltimore, MD: Brookes Publishing.
- Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high-performing school systems use data to improve instruction for elementary students. Los Angeles, CA: University of Southern California, Rossier School of Education, Center on Educational Governance.
- Davidson, K., & Frohbieter, G. (2011). District adoption and implementation of interim and benchmark assessments (CRESST Report 806). Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing.
- Dedrick, R. F., Ferron, J. M., Hess, M. R., Hogarty, K. Y., Kromrey, J. D., Lang, T. R., … Lee, R. S. (2009). Multilevel modeling: A review of methodological issues and applications. Review of Educational Research, 79(1), 69–102.
- Goertz, M. E., Nabors-Olah, L., & Riggan, M. (2010). From testing to teaching: The use of interim assessments in classroom instruction (CPRE Research Report #RR-65). Philadelphia, PA: Consortium for Policy Research in Education.
- Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177.
- Indiana State Board of Education. (2006). A long-term assessment plan for Indiana: Driving student learning. Indianapolis, IN: Author.
- Kennedy, M. M. (2005). Inside teaching: How classroom life undermines reform. Cambridge, MA: Harvard University Press.
- Konstantopoulos, S., Miller, S., & van der Ploeg, A. (2013). The impact of Indiana's system of interim assessments on mathematics and reading achievement. Educational Evaluation and Policy Analysis, 35(4), 481–499.
- Luce, T., & Thompson, L. (2005). Do what works: How proven practices can improve America's public schools. Dallas, TX: Ascent Education Press.
- May, H., & Robinson, M. A. (2007). A randomized evaluation of Ohio's Personalized Assessment Reporting System (PARS). Philadelphia, PA: University of Pennsylvania, Consortium for Policy Research in Education.
- Michael & Susan Dell Foundation. (2009). Performance management report. Austin, TX: Author.
- Nye, B., Hedges, L. V., & Konstantopoulos, S. (2000). Effects of small classes on academic achievement: The results of the Tennessee class size experiment. American Educational Research Journal, 37, 123–151.
- Perie, M., Marion, S., Gong, B., & Wurtzel, J. (2007). The role of interim assessments in a comprehensive assessment system: A policy brief. Retrieved from http://www.achieve.org/files/TheRoleofInterimAssessments
- Plank, S. B., & Condliffe, B. F. (2013). Pressures of the season: An examination of classroom quality and high-stakes accountability. American Educational Research Journal, 50(5), 1152–1182.
- Sawchuk, S. (2009, May 13). Testing faces ups and downs amid recession. Education Week, 28, pp. 1, 16–17.
- Shepard, L., Davidson, K., & Bowman, R. (2011). How middle school mathematics teachers use interim and benchmark assessment data (CSE Technical Report). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
- Slavin, R. E., Cheung, A., Holmes, G. C., Madden, N. A., & Chamberlain, A. (2013). Effects of a data-driven district reform model on state assessment outcomes. American Educational Research Journal, 50, 371–396.
- Stock, J. H., Wright, J. H., & Yogo, M. (2002). A survey of weak instruments and weak identification in generalized method of moments. Journal of Business and Economic Statistics, 20(4), 518–529.
- Tomlinson, C. A. (2000). Differentiation of instruction in the elementary grades. Champaign, IL: ERIC Clearinghouse on Elementary and Early Childhood Education, University of Illinois.
- Wang, Y., & Gushta, M. (2013, September). Improving student outcomes with mCLASS Math, a technology-enhanced CBM and diagnostic interview assessment. Paper presented at the annual conference of the Society for Research on Educational Effectiveness, Washington, DC.
- Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data. Cambridge, MA: MIT Press.