
Supporting Instructional Improvement in Low-Performing Schools to Increase Students’ Academic Achievement

Pages 235-248 | Published online: 12 Feb 2013

ABSTRACT

This is an impact evaluation of the Technical Support to Failing Schools (TSFS) Program, a Chilean compensatory program that provided 4-year in-school technical assistance to low-performing schools to improve students’ academic achievement. The author implemented a quasi-experimental design, using difference-in-differences estimation combined with propensity score matching to estimate treatment effects. The main findings were the following: (a) the program had positive effects on fourth-grade students’ achievement in both language and mathematics; (b) the program effect size was 0.23 standard deviations and was not sensitive to controlling for covariates; (c) effects were larger for students in the middle of the test-score distribution; (d) after the intervention ceased, the program’s impact declined rapidly; and (e) the program reduced grade retention by 1.5 percentage points.
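The identification strategy described above (difference-in-differences across pre/post cohorts in program and comparison schools) can be sketched on synthetic data. Everything below is illustrative: the variable names, the sample size, and the planted effect of 11 score points are assumptions for the sketch, not the study's data or estimates.

```python
import numpy as np

# Illustrative difference-in-differences on synthetic data (not the study's data).
rng = np.random.default_rng(0)

n = 1000
treated = rng.integers(0, 2, n)   # 1 = program school, 0 = comparison school
post = rng.integers(0, 2, n)      # 1 = post-program cohort, 0 = pre-program cohort
effect = 11.0                     # planted treatment effect, in score points

# Outcome: baseline + group gap + secular trend + DiD effect + noise
score = 250 + 5 * treated + 3 * post + effect * treated * post + rng.normal(0, 10, n)

def cell_mean(t, p):
    """Mean score for one group-by-period cell."""
    return score[(treated == t) & (post == p)].mean()

# DiD estimator: the pre/post change in program schools minus the
# pre/post change in comparison schools
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(round(did, 2))  # should land near the planted effect of 11.0
```

The same estimate is usually obtained by regressing the outcome on a treated-by-post interaction, which also allows adding covariates, as the abstract's sensitivity check does.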

ACKNOWLEDGMENTS

I would like to thank Richard J. Murnane, John B. Willett, Fernando Reimers, and two anonymous reviewers for their valuable commentaries and suggestions.

Notes

1. Launched in 1965, the program supplied additional funds to local educational administrations that served areas with defined concentrations of low-income families. The goal was to improve the education provided to disadvantaged students. Given the high level of autonomy of U.S. local authorities, Title I encompassed a great many heterogeneous educational interventions under this single funding umbrella. Technically, the current No Child Left Behind Act of 2001 (2002) is a reauthorization of this program.

2. The P-900 Program was the most important Chilean compensatory educational program implemented during the 1990s. It provided extra support (including teaching materials, teacher training, and extra-curricular activities) to low-performing primary schools that served low-income students. It was the most direct antecedent of the TSFS Program.

3. Certainly, according to studies such as the Trends in International Mathematics and Science Study and the Programme for International Student Assessment, U.S. and Chilean students, on average, perform very differently on mathematics and language tests; nevertheless, in relative terms, these international comparisons show that both educational systems face similar challenges related to high levels of inefficiency and inequity. Additionally, both countries have educational systems in which local authorities are responsible for the administration of public schools, which creates a complex scenario for the design and implementation of nation-level educational policies; as a consequence, U.S. and Chilean policymakers are promoting the kind of programs discussed here in the context of standards-based reforms.

4. The implementation of the program was limited to the Santiago metropolitan area for financial reasons, and as a pilot for other regions of the country.

5. The combined average score on these same tests was 250, nationally, and the standard deviation of the school mean was about 28.

6. I did not use SIMCE-2002 scores as the PRE measure because the TSFS Program started in 2002. Because SIMCE was administered at the end of the academic year, the 2002 scores may already reflect 1 year of treatment.

7. In the rest of the article, schools that entered the TSFS Program are referred to as program schools, and schools that did not participate are referred to as comparison schools. The preprogram cohort comprises students tested before implementation, and the postprogram cohort comprises students tested after implementation. Finally, students who attended program schools after program implementation are referred to as treated students.

8. Note that because program participation was decided at the school level, I implemented the matching procedure at the school level as well.
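The note names school-level matching but not its implementation; a common approach, sketched below under stated assumptions, is to fit a logistic propensity model on school-level covariates and match each program school to its nearest comparison school on the estimated score. The covariates, sample size, and selection rule are all illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# School-level propensity score matching sketch on synthetic data.
rng = np.random.default_rng(1)

n_schools = 200
X = np.column_stack([
    rng.normal(250, 28, n_schools),   # illustrative: mean school test score
    rng.uniform(0, 1, n_schools),     # illustrative: share of low-income students
])
# Assumed selection rule: lower-scoring, poorer schools are likelier to enroll
logit = -0.05 * (X[:, 0] - 250) + 2.0 * (X[:, 1] - 0.5)
treat = (rng.uniform(size=n_schools) < 1 / (1 + np.exp(-logit))).astype(int)

# 1. Estimate each school's propensity score
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. Match each program school to the comparison school with the closest score
t_idx = np.flatnonzero(treat == 1)
c_idx = np.flatnonzero(treat == 0)
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

print(len(t_idx), len(matches))  # one matched comparison school per program school
```

This is nearest-neighbor matching with replacement; variants (calipers, matching without replacement) change step 2 only.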

9. One of the 70 program schools did not have POST measures and was dropped from the sample.

10. As part of the sensitivity analysis, I also estimated program impact on grade retention rate (i.e., a school-level outcome).

11. Following Singer (Citation1998), I estimated Model 1 and all other models using the MIXED procedure in SAS (Version 8.2, SAS Institute, Cary, NC) to account for the clustering of students within school, and to correctly compute standard errors.
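The author used SAS PROC MIXED; for readers without SAS, a rough Python analogue of such a model (a random intercept for each school, so that students are clustered within schools) can be sketched with statsmodels on synthetic data. The formula, variable names, and planted effect are illustrative assumptions, not the article's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Random-intercept model of students nested in schools, on synthetic data.
rng = np.random.default_rng(2)

n_schools, n_students = 40, 25
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 15, n_schools)[school]       # between-school variation
treated = np.repeat(rng.integers(0, 2, n_schools), n_students)  # school-level treatment
score = 250 + 11 * treated + school_effect + rng.normal(0, 40, school.size)

df = pd.DataFrame({"score": score, "treated": treated, "school": school})

# groups= declares the clustering unit; standard errors account for it
fit = smf.mixedlm("score ~ treated", df, groups=df["school"]).fit()
print(round(fit.params["treated"], 2))  # estimate of the treatment coefficient
```

Ignoring the clustering (an ordinary regression on the pooled students) would understate the standard error of school-level coefficients, which is the problem the note addresses.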

12. To compute effect sizes, I divided the estimated regression coefficients by the standard deviation of the corresponding outcome in the program group: math (SD = 47.57) and language (SD = 48.33).
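The computation in this note is a simple ratio. The sketch below uses illustrative coefficients back-calculated to reproduce the reported 0.23 standard deviation effect sizes; the article's actual coefficient estimates are not restated here.

```python
# Note 12's arithmetic: regression coefficient / program-group SD of the outcome.
# The SDs are from the note; the coefficients are illustrative back-calculations.
sd = {"math": 47.57, "language": 48.33}
coef = {"math": 10.94, "language": 11.12}   # chosen so the ratio is ~0.23 SD

for subject in sd:
    print(subject, round(coef[subject] / sd[subject], 2))  # math 0.23 / language 0.23
```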

13. To compute effect sizes, I divided the estimated regression coefficient by the standard deviation of the corresponding outcome in the program group: grade retention rate (SD = 3.05). Note that the reported program effect sizes on test scores and grade retention rates are not directly comparable, because the former is estimated using a student-level outcome variable while the latter is estimated using a school-level outcome variable, which tends to overestimate program effect sizes. For comparison, when I replicated the estimates of program impact on test scores using SIMCE school means as the outcome variable, the estimated program effect sizes increased to 0.83 standard deviations in mathematics and 0.57 standard deviations in language (from the original estimates of 0.23 standard deviations in both subjects). This analysis is relevant for comparing my results with previous research on compensatory programs in Chile; for example, Chay et al. (Citation2005) used exclusively school-level data.

14. In analyses not reported here, I found a similar pattern for science test scores: the TSFS Program had a statistically significant positive effect on students’ achievement in both 2005 and 2006, but the estimated program effect size decreased from 0.29 standard deviations in 2005 to 0.16 in 2006.

15. Notably, after the TSFS intervention ended, the government did not apply sanctions to schools that failed to meet the stated program goals.

16. Borman et al. (Citation2003) estimated that comprehensive school reforms needed about 5 years to significantly increase their impact on students’ academic achievement. By way of comparison, the aforementioned Chilean P-900 Program applied a 2-year intervention cycle.

17. An alternative explanation is teacher turnover, a common obstacle to school improvement programs in schools serving at-risk students. Unfortunately, although TSFS consultant teams provided some impressionistic evidence on this issue, I have no data to test this hypothesis.
