Original Articles

The effectiveness of school-based decision making in improving educational outcomes: a systematic review

Roy Carr-Hill, Caine Rolleston, Rebecca Schendel & Hugh Waddington
Pages 61-94 | Received 08 Feb 2018, Accepted 09 Feb 2018, Published online: 28 Feb 2018
 

ABSTRACT

The rhetoric around decentralisation suggests that school-based management improves education outcomes. Existing reviews of school-based decision-making have tended to focus on proximal outcomes and offer very little information about why school-based decision-making has positive or negative effects in different circumstances. The authors systematically searched for and synthesised evidence on the effectiveness of school-based decision-making in improving educational outcomes from 35 quantitative and qualitative studies evaluating 17 individual interventions. Devolving decision-making to the level of the school appears to have a somewhat beneficial effect on dropout, repetition and teacher attendance. Effects on test scores are more robust, being positive in aggregate and for middle-income countries specifically. On the other hand, school-based decision-making reforms appear to be less effective in communities with generally low levels of education, where parents have low status relative to school personnel. The authors conclude that school-based decision-making reforms are less likely to be successful in highly disadvantaged communities.

Acknowledgements

This review was supported by the Department for International Development’s (DFID’s) Systematic Reviews Programme. We acknowledge contributions to the review process by Tejendra Pherali, Edwina Peart and Emma Jones at UCL Institute of Education and Claire Stansfield and Carol Vigurs at the Evidence for Policy and Practice Information and Coordinating Centre (EPPI-Centre).

Declaration of interest

None of the team members have any financial interests in the review, nor have any team members been involved in any other systematic review focused on this topic or in the development of any of the interventions investigated.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. A recent paper by Evans and Popova (2015) argues that divergent conclusions from systematic reviews tend to be driven by a reliance on different samples of research studies, which, in turn, are driven by differing criteria for inclusion. However, the sample of studies included in that review of reviews largely draws on studies which do not use systematic methods of search, appraisal or synthesis.

2. Income classifications reflect the World Bank’s income classification system. Countries were classified according to their status at the start date of the intervention under investigation, rather than their current classification.

3. Studies written in languages other than English were excluded, unless English translations were available, as no further linguistic ability was represented within the review team.

4. We developed a risk of bias assessment tool based on ‘Suggested risk of bias criteria for EPOC reviews’ (Cochrane EPOC, 2014), with additional questions suggested by Hombrados and Waddington (2012).

5. As existing systematic reviews (for example, Petrosino et al., 2012) have indicated a lack of relevant studies on education decentralisation in developing countries published prior to 2000, we limited our electronic searches to studies published in or after 2000. We did not set any such date boundary for our other search methods (for example, review of reviews).

6. We were unable to complete forward citation chasing of included studies.

7. An additional four studies were identified through reference searching and expert checking.

8. In two of the three studies (Paes de Barros & Mendonca, 1998; De Umanzor et al., 1997), we identified a substantial risk of confounding factors influencing the impact estimates, while there was a high risk of bias due to attrition in the final study (Cueto et al., 2008). Other risks, including risk of motivation bias and clustering, were also identified in one of the three studies (De Umanzor et al., 1997).

9. Carnoy et al. (2008) was excluded from meta-analysis due to missing data.

10. Comparisons of effect sizes measured in standard deviations are comparisons of relative measures, requiring, for example, assumptions concerning the distribution and measurement of a phenomenon or trait (for example, educational performance as measured by a test) in the samples being compared. It was not possible to calculate a standardised mean difference (SMD) in every case, particularly for studies that reported neither the standard deviation of the outcome variable and the number of observations, nor the statistics required to compute or estimate them (for example, t, z or F statistics, p-values or standard errors). However, we employed appropriate methods to generate comparable effect sizes wherever possible, including the Campbell Collaboration online effect size calculator (http://www.campbellcollaboration.org/resources/effect_size_input.php).
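As an illustration of the kind of conversion this note describes (a minimal sketch, not the authors' code; the function names and example figures are hypothetical), an SMD such as Hedges' g can be recovered from group means, standard deviations and sample sizes, or approximated from a reported independent-samples t statistic:

import math

def smd_from_summary(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g from group means, standard deviations and sample sizes."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd      # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)      # small-sample correction factor
    return d * j

def smd_from_t(t, n_t, n_c):
    """Cohen's d approximated from an independent-samples t statistic."""
    return t * math.sqrt(1 / n_t + 1 / n_c)

# Hypothetical test-score summaries for a treatment and a control group of schools.
print(round(smd_from_summary(52.0, 10.0, 120, 49.5, 11.0, 115), 3))  # approx. 0.237
print(round(smd_from_t(2.1, 120, 115), 3))                           # approx. 0.274

Either route expresses the effect on the same standard-deviation scale, which is what allows test-score results from different studies to be pooled in the meta-analysis.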

11. Aggregated tests are multi-subject tests. The National Achievement Test in the Philippines comprises Math, English, Filipino, Science and Social Science. The test used by Bold et al. (2013) covers only Math and English.

12. Of the 14 studies that measured the impact of a school-based decision-making intervention on student language test scores, some reported test data for more than one language. The languages tested were usually the language of instruction in the school, where available.

13. Results of moderator analyses by type of evaluation method used (with or without randomised assignment) and by risk of bias assessment are available in the technical report (Carr-Hill, Rolleston, and Schendel 2016). The results for RCTs and quasi-experimental studies are similar overall, and we could not identify any significant differences between the effects indicated by low and medium risk of bias studies.

14. In some instances, schools were given grants for explicit purposes, for example, the hiring of contract teachers (Blimpo and Evans 2011; Bold et al. 2013; Duflo, Dupas, and Kremer 2012). However, no study in the sample was able to estimate the marginal impact of allocating grants, because all studies included a grant component in treatment and control arms.

Additional information

Notes on contributors

Roy Carr-Hill

Professor Roy Carr-Hill has taught at Sussex University (1971-74), Universidade Eduardo Mondlane, Mozambique (1978-81) and the UCL Institute of Education (1993-present); worked on social indicators at the OECD in Paris (1974-77); and has held research posts at the MRC Medical Sociology Unit, Aberdeen (1981-84), the Centre for Health Economics (1984-2011) and the School of Political and Social Sciences, Hull (1990-93). He has worked in 35 developing countries as a consultant for several bilateral and multilateral agencies on a wide variety of evaluation and review consultancies spanning education, health and justice. He is currently based at the UCL Institute of Education, researching accountability, adult literacy, the rationality of drop-out, the exploitation of women teachers and the interplay between education and extremism, and is writing a book on understanding social data.

Caine Rolleston

Dr. Caine Rolleston is Senior Lecturer in Education and International Development at the UCL Institute of Education and Lead Education Researcher at the Young Lives Project, University of Oxford. His research focuses on analysis of education systems in low- and middle-income countries, including school quality and effectiveness and educational access and equity.

Rebecca Schendel

Dr. Rebecca Schendel is Lecturer in Education and International Development at the UCL Institute of Education. Her research focuses on the relationship between higher education and human development in sub-Saharan Africa, with particular emphasis on questions of pedagogy, student learning and processes of institutional change.

Hugh Waddington

Hugh Waddington is Senior Evaluation Specialist and currently Acting Head of 3ie's Synthesis and Reviews Office. He has a background in research and policy, having worked previously in the Government of Rwanda, the UK National Audit Office and the World Bank, and before that with Save the Children UK and the Department for International Development. He is managing editor of the Journal of Development Effectiveness and co-chair of the International Development Coordinating Group (IDCG) of the Campbell Collaboration.

