
A Statistical Approach to Identifying Schools Demonstrating Substantial Improvement in Student Learning

Pages 70-91 | Published online: 08 Feb 2012
 

Abstract

The rising tide behind the school turnaround movement is significant, as national education leaders continue to call for the rapid improvement of the nation's lowest-performing schools. To date, little work has been done to identify schools that are drastically improving their performance. Using publicly available school-level student achievement data from Minnesota, we begin to detail the statistical and practical steps necessary to identify schools rapidly improving their performance.

Notes

1It is worth noting that some of the case studies identified in the report (e.g., Picucci, Brownson, Kahlert, & Sobel, Citation2002) appear to focus more on high-poverty, high-performing schools than on turnaround.

2In addition, at least 40% of students had to be scoring below proficiency on state tests.

3At present, there exist no fewer than three connotations for the phrase turnaround school. Some local education agencies have used the phrase to mean poorly performing schools targeted for some program intended to facilitate substantial changes in school operations and student academic performance. One such example includes Chicago Public Schools, whose Office of School Turnaround is charged with overseeing the restructuring of schools. Other researchers (e.g., Calkins, Guenther, Belfiore, & Lash, Citation2007) have used the term turnaround to refer to schools that are implementing a particular model of school change, regardless of whether it affects student achievement. Finally, Herman et al. (Citation2008) referred to turnaround schools as those schools that were chronically low-performing but produced substantial improvements in student performance in a short amount of time. Our work focuses on this latter definition. To avoid confusion, this article uses the phrase substantial-improvement schools from this point forward.

4School leaders also look at the percentage of students who are proficient because the leaders may have set a building-level goal using this indicator and because percent proficient is one indicator highlighted in NCLB (i.e., all students proficient by 2014).

5Ho (2008) offered the following five reasons why percent proficient should not be used as a sole indicator for school performance: (a) cutoff scores used to determine proficiency rates are largely arbitrary; (b) the magnitude of proficiency rate increases or decreases over time, across groups, or across groups over time is largely dependent on the chosen cutoff score; (c) gap differences over time between groups of interest (e.g., based on ethnicity) also are dependent on the cutoff score; (d) the dependencies produced using proficiency rates for gaps, trends, and gap trends over time are not easily predictable or remedied; and (e) the arbitrariness of cutoff score determinations allows policymakers to manipulate school performance findings to be consistent with their viewpoints.
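Points (b) and (c) can be illustrated with a small simulation. The sketch below is not from the article; the group labels, score distributions, and cutoffs are invented. It shows that with two fixed score distributions, the proficiency-rate gap between groups changes depending solely on where the cutoff is placed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scale scores for two student groups; the means, SDs,
# and group labels are illustrative assumptions, not article data.
group_a = rng.normal(350, 10, 5000)   # higher-scoring group
group_b = rng.normal(345, 10, 5000)   # lower-scoring group

# The gap in percent proficient depends on the chosen cutoff score,
# even though the underlying distributions never change.
gaps = {}
for cutoff in (340, 350, 360):        # three arbitrary cutoffs
    gap = (group_a >= cutoff).mean() - (group_b >= cutoff).mean()
    gaps[cutoff] = gap
    print(f"cutoff={cutoff}: proficiency-rate gap = {100 * gap:.1f} points")
```

A cutoff near the center of the distributions exaggerates the gap, while a cutoff in either tail shrinks it, which is precisely why cutoff-dependent gap trends are hard to interpret.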

6This process was tested using school-level data from the MCA-II, the standards-based assessment annually administered to students throughout Minnesota for accountability purposes. For each of the nine types of schools (schools serving the same grade levels of students) examined, only a single factor emerged: schoolwide achievement. The one-factor solutions were supported by all commonly prescribed methods of determining the number of factors: eigenvalues greater than 1, inflection points in scree tests, and so on.
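The one-factor check described above can be sketched as follows. The data here are simulated, not the MCA-II school means: a matrix of school-by-(grade, subject) mean scores is generated from a single latent schoolwide factor, and the eigenvalues-greater-than-1 rule is applied to the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated matrix of school mean scores: 200 schools by 6 grade/subject
# combinations, all driven by one latent schoolwide-achievement factor.
# The dimensions and loadings are illustrative assumptions.
schoolwide = rng.normal(0, 1, (200, 1))        # latent school factor
loadings = np.full((1, 6), 0.8)                # uniform loadings
scores = schoolwide @ loadings + rng.normal(0, 0.4, (200, 6))

# Eigenvalues of the correlation matrix, sorted descending; the Kaiser
# rule retains components whose eigenvalues exceed 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]
n_factors = int((eigvals > 1).sum())
print(eigvals.round(2), "->", n_factors, "factor(s) retained")
```

With one strong latent factor, the first eigenvalue dominates and only one component survives the rule, matching the single schoolwide-achievement factor the note reports.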

7This criterion is aligned with the connotation underlying the term turnaround. Students in these schools must now show good performance on state standards-based achievement tests.

Note. Italicized numbers represent cutoff points made by the researchers prior to testing the model using publicly available data on Minnesota schools. SEA analysts might wish to adjust these cutoff points to better reflect expectations within their state.

8The analysis for this example would include data for the years 2004–2009, with a focus on changes in schoolwide achievement between 2006 and 2009 (a 3-year span, consistent with screening criteria used by Herman et al., Citation2008, for turnaround schools). Other 3-year spans can be used as well.

9In establishing the criteria for schools such as this one, analysts must set statistical cutoff points. The cutoff point for this model—that 75% of the year-to-year changes must be nonnegative—represents a compromise between two positions. First, though desirable, it is unrealistic to expect schools to demonstrate nonnegative changes from year to year for 100% of grade levels and subjects. However, the idea of substantial improvement does imply that a clear majority of year-to-year changes need to be nonnegative. The 75% cutoff point thus represents a compromise between a simple majority (51%) and unanimity (100%).
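The 75% screening rule reduces to a one-line computation. The sketch below uses invented year-to-year changes for a hypothetical school (one value per grade, subject, and year pair); the 0.75 threshold is the one stated in the note.

```python
import numpy as np

# Hypothetical year-to-year changes in mean scale score for one school,
# one entry per (grade, subject, year-pair) combination; values invented.
changes = np.array([1.2, 0.0, 3.4, -0.5, 2.1, 0.8,
                    1.9, -1.1, 0.3, 2.7, 0.6, 1.4])

# Screening rule from the note: at least 75% of year-to-year changes
# must be nonnegative (zero counts as nonnegative).
share_nonnegative = (changes >= 0).mean()
passes = bool(share_nonnegative >= 0.75)
print(f"{share_nonnegative:.0%} nonnegative -> passes screen: {passes}")
```

Here 10 of 12 changes are nonnegative (about 83%), so this hypothetical school would clear the criterion.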

10When this statistical model was applied to Minnesota's school-level data, a number of schools—especially high schools—failed to achieve the label of substantial-improvement schools because of this criterion. Several of these schools demonstrated remarkable improvements, with percent proficient rising from single digits to nearly 40%. However, these schools have yet to come close to the level of other schools within the state. Perhaps states can create another label for these schools to signify their improvement.

11Fewer data are publicly available for high schools than for other types of schools. Only two tests are given each year (reading for 10th graders and mathematics for 11th graders). Analysts therefore included graduation rate within the factor analyses. Graduation rates were unavailable for 2009.

12Technically, factor scores came from principal components analysis.

13Lacking a clear definition of low-performing, the analysis team chose the 25th percentile as the benchmark for low-performing schools. Arbitrary and relative as this definition may be, it is clear that the academic performance of students in these schools falls substantially below the standards set by the state. For instance, for the K–4 school category, the 2006 mean MCA-II score for third graders across the full set of schools is 358.45, whereas the mean for the bottom 25% of schools on the school performance factor is 352.89, a sizeable difference of more than 1 SD.

14Note that 2004 and 2005 data for schools are based on scores from a different test (MCA). However, the same grades and subject areas were tested in the same schools. The process of factor analyzing school means by grade and subject for each school adjusts for differences in scaling.

15A d of 0.25 was adopted based on the benchmark mentioned in Herman et al. (Citation2008).
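The d = 0.25 screen can be expressed as the average score change divided by a standard deviation. The numbers below are invented for illustration; in particular, the state-level SD of school means is an assumed value, not a figure from the article.

```python
import numpy as np

# Hypothetical schoolwide mean scores (one value per grade/subject
# combination) for 2006 and 2009; all numbers are invented.
scores_2006 = np.array([352.0, 349.5, 355.1, 351.2])
scores_2009 = np.array([357.4, 356.0, 359.8, 356.9])

# Assumed SD of the statewide distribution of school mean scores.
state_sd = 9.0

# Average change in standard-deviation units, compared against the
# d = 0.25 benchmark cited from Herman et al. (2008).
d = (scores_2009 - scores_2006).mean() / state_sd
print(f"d = {d:.2f}; meets d >= 0.25 benchmark: {d >= 0.25}")
```

Expressing change in SD units rather than raw scale points lets the same benchmark apply across grades and subjects whose score scales differ.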

a Standard errors across grade and subject combinations were found to range from 0.9 scale score points to 3 scale score points. All changes between 2006 and 2009 for the seven substantial-improvement schools exceed possible ranges of scores established by standard errors (i.e., positive changes were still apparent). If maximum values for 2006 estimates (maximum values established by adding standard error) and minimum values for 2009 (minimum values established using standard error) were substituted for average values, average changes in standard deviation units would not all exceed the d = 0.25 threshold.

b These three schools all demonstrated rapid improvements but could not be considered turned around. For example, 4-year graduation rates in these three schools were all less than 50% (in one school it was less than 10%).

Note. Locale = type of locality listed in the National Center for Education Statistics Common Core of Data (CCD). Enrollment = student enrollment in school per MDE data for 2009. % FRPL = percentage of students in school eligible to receive free or reduced-price lunch in 2009. % Minority = percentage of students in school who are not White in 2009. Title I Eligible = eligibility to receive services for federal Title I services per data in CCD. Factor Score = score for school associated with dominant factor that emerged from principal component analysis (conducted on schools within same category of schools). Percentile = percentile for school based on factor score, relative to other schools within category. Sources: Data files available on Minnesota Department of Education's Web site www.education.state.mn.us/MDE/Data/Data_Downloads/index.html; National Center for Education Statistics Common Core of Data. Factor score derived from all grade/subjects (e.g., Grade 3 reading, Grade 3 mathematics) at a given school.
