ABSTRACT
Charter schools were originally intended to improve the American public education system by introducing innovative practices that could be replicated elsewhere. Charter critics and proponents alike, however, question the degree to which charter schools are truly innovative. While alarm has been raised about apparent conformity among charter schools, scant literature explores how this conformity came to pass. We test the hypothesis that innovation is particularly hampered in states with stringent charter school authorizing regulation, which may lead authorizers to favor, and school leaders to propose, schooling models that please authorizers and focus narrowly on student achievement. To test this hypothesis, we develop a typology that scores how innovative charter schools are based on their curriculum, pedagogy, learning modality, themes, and population served. We then evaluate how these innovation scores correlate with charter authorizing regulations. Overall, we find a strong, negative association between regulation and innovation.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 We assume a one-year lag between charter authorization and opening. In other words, we assume that a school that opened for the 2015-16 school year was authorized in 2014 and subject to the charter authorizing regulations in place at that time. Consequently, our analysis uses the 2014 NACSA score for schools that opened in 2015-16, the 2015 score for schools that opened in 2016-17, and the 2016 score for schools that opened in 2017-18. The time between authorization and opening varies considerably from school to school (and some schools that are authorized never open), but a review of charter school petitions conducted for previous research indicates that schools typically open in the calendar year after the one in which they are authorized.
2 We assume that charters were subject to the regulatory regime in place one calendar year before the school year in which they opened. For example, a charter that opened in 2016-17 is assumed to have been subject to the regulations in place in 2015. A sensitivity analysis confirms that the results are unchanged if we instead assume no lag between authorization and opening.
3 We omit state fixed effects because NACSA scores more often remain static from year to year than change; state indicators and NACSA scores are therefore highly collinear.
Additional information
Notes on contributors
Ian Kingsbury
Ian Kingsbury is a senior fellow at the Educational Freedom Institute. He received his PhD in education policy from the University of Arkansas.
Jay Greene
Jay Greene is a senior research fellow in The Heritage Foundation’s Center for Education Policy. He received his PhD in government from Harvard University.
Corey DeAngelis
Corey DeAngelis is the national director of research for the American Federation for Children. He received his PhD in education policy from the University of Arkansas.