Abstract
Do quality-at-entry assessments enhance the delivery of development projects? In this paper we take advantage of approval and execution systems in place at the Inter-American Development Bank (IDB) to examine whether projects with higher quality-at-entry – captured through grading scores provided by a checklist – perform better on project implementation performance indicators. Implementation indicators include measures based on actual versus planned schedules of activities and cost outlays, as well as the percentage of the loan disbursed. The analysis suggests that higher scores on project logic and economic analyses at entry have had a positive impact on project performance. However, monitoring and impact assessment scores had limited effects on performance. The evidence supports the hypothesis that a checklist can be an effective framework for assessing quality-at-entry for IDB projects, though there is scope to improve the checklist for certain indicators.
Acknowledgements
The authors would like to thank Shakirah Cossens, Diego Cortes, Ange Liu, and Luis F. Diaz for supporting data collection efforts, Carola Alvarez and Oscar Mitnik for helpful comments, and Josh Brubaker for excellent research assistance. We are grateful for the support of the Office of Strategic Planning and Development Effectiveness of the Inter-American Development Bank in conducting the study, and to participants of the SPD Half-Baked Lunch seminar series for their helpful comments.
Disclosure statement
No potential conflict of interest was reported by the authors. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the Inter-American Development Bank, its Board of Directors, or the countries they represent.
Supplemental material
Supplemental material for this article can be accessed via the online version of this journal at https://doi.org/10.1080/00220388.2018.1554210
Notes
1. There are three sub-sections; the third needs to be assessed only for ‘policy-based’ loans. We do not include these types of loans in this analysis.
2. Beginning in 2015, SPD has categorised values greater than two as ‘outliers’. For our analysis, we use the original categories to develop our cost and schedule performance indicators.
3. We also ran all specifications using the average progress scores, which produced results very similar to those using the latest scores, so we present results using the latest scores. Results for average scores can be obtained from the corresponding author.
4. We tested for normality of the residuals using the ‘sktest’ command in STATA, which tests for skewness and kurtosis in the error distributions. We ran the tests because a number of observations are bunched at one, that is, where actual and planned performance coincide. The test results led us not to reject normally distributed errors. To probe the robustness of the results, we normalised all of the cost and schedule deviation variables and ran generalised linear models (GLMs). The results for all GLMs are qualitatively very similar to the OLS results, so we include only the OLS results in the paper. Results can be obtained from the corresponding author.
5. We also ran the estimations using the ‘exchangeable’ correlation structure, which produced similar results; these can be obtained from the corresponding author.