
Institutionalisation of government evaluation: balancing trade-offs

Pages 289-309 | Published online: 24 Sep 2010

Abstract

Carefully designed and implemented evaluations can improve people's welfare and enhance development effectiveness. This paper investigates institutions in Mexico, Chile, and Colombia, and shows that three common factors stand out for the successful inception of an institutionalised evaluation system: the existence of a democratic system with a vocal opposition, the existence of influential monitoring and evaluation (M&E) champions to lead the process, and a clear, powerful stakeholder. Mexico's CONEVAL is the most independent of the three bodies, mainly because it reports to an executive board of independent academics; Chile's Dipres is the best placed in terms of enforcement, given its location within the Ministry of Finance and its control of an independent budget; and Colombia's SINERGIA helps promote a culture of using evaluations as a project management tool. However, actual use of M&E information and its resulting effect on development effectiveness are the benchmarks of success. The paper concludes that an explicit and thoughtful process of assessing the needs, focus, and emphasis of the system should help officials and champions identify arrangements suited to the particular country context and understand how best to respond to the forces pushing for the creation of new M&E units and bodies.

Notes

1. Both monitoring and evaluation systems are most useful if they are incorporated into a programme or intervention from its inception; in the case of evaluation, however, a number of techniques allow evaluations to be carried out later in the programme's life.

2. To complicate matters, however, the concept of evaluation encompasses a number of different methodologies, including consistency and results evaluation (a logframe type of evaluation), process evaluation, benefit incidence and targeting evaluation, beneficiary satisfaction evaluation, a range of qualitative evaluations, impact evaluations, and a host of others. Each draws on different data sources and, in particular, relies on programme monitoring data to a different extent. While an impact evaluation could in theory be carried out with minimal interaction with the programme and its staff, a process evaluation naturally has to be done in close collaboration with them.

3. The Evaluation Cooperation Group has recognised four interrelated dimensions of evaluation independence: organisational independence; behavioural independence; protection from external influence; and avoidance of conflicts of interest.

4. This section draws on the 3ie report Institutionalising evaluation: a review of international experience (Briceño and Gaarder, 2009).

6. Diario Oficial, México (2004a, 2004b, 2005).

7. An administrative department with ministerial status.

8. Diario Oficial, México (2005).

9. The broader picture of government M&E activities comprises other institutions that perform monitoring and auditing at the central level. These activities are more aligned with performance-based management: essentially, monitoring and budget-execution follow-up led by the SHCP, and auditing carried out by the SFP. There are ongoing initiatives to create evaluation units under each of these institutions. Three areas can therefore be identified where an institutionalisation gap remains in Mexico: the alignment of central evaluation efforts between these new evaluation units and CONEVAL; the lack of evaluation at sub-national government levels; and the relative absence of institutionalised evaluations (impact evaluation and others, such as process evaluation) in the non-social sectors.

10. The Board also includes the Minister of Social Development and the Executive Director of CONEVAL.

11. It comprises 32 officials from social development entities at the federal level; the heads of the Ministries of Social Development, Education, Health, Labour, Agriculture, the Environment and Natural Resources; a representative from each of the national municipal associations; and the presidents of the Social Development commissions in the Senate and Chamber.

12. Criteria for members include being or having been members of the national system of researchers and having broad expertise in the subject of evaluation or poverty measurement.

13. In addition, they mandate Internet disclosure of the contact information of the external evaluator and of the official responsible for the programme; the type of evaluation; databases; data collection instruments; a methodological note describing the methodologies and models used, along with the sampling design and sample characteristics; an executive summary with the main findings and the external evaluator's recommendations; and, finally, the total cost of the external evaluation, including the source of funding.

16. In practice, DEPP's head also reports on an ad hoc basis to the Advisory Minister to the Presidency, one of the main users of the M&E information provided.

17. Resources for evaluations come primarily from the programmes; some evaluations have had support from multilaterals that earmark resources for evaluation within the loan budgets.

18. In the case of an urban workfare programme, Empleo en Acción, the decision to close the programme predated the evaluation results (indeed, the evaluation was nicknamed ‘the autopsy’); and in the case of a youth training programme, Jóvenes en Acción, the programme was completely transformed before results became available, despite the substantial positive effects found afterwards.

19. Monitoring information is used extensively by the President and his office as a control tool.

20. This approach will tend to favour ‘stronger’ programmes and institutions, perhaps allowing those most in need of evaluation to opt out.

21. Diario Oficial, México (2007).

22. Aspects to improve are classified into three types according to their nature: specific (those that are the responsibility of the programme officers), institutional (those requiring attention from various units within the agency), and inter-institutional (requiring attention of external agencies) or inter-governmental (requiring attention of different government levels). The sector agencies themselves classify the aspects as of high, medium or low priority, according to their perceived contribution to the achievement of the programme's goal.

23. Should confounding effects exist, however, they will need to be dealt with in order to sensibly attribute outcomes to evaluation practices.

24. For an interesting example of this potential measure, examining the correlation between evaluation results and budget growth of evaluated programmes in Korea, see Kim and Park (2007) and Park (2008).
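
By way of illustration only, the sketch below shows how such a correlation might be computed. All data, variable names, and figures are hypothetical and are not drawn from the Kim and Park studies.

```python
# Hypothetical sketch: correlating programme evaluation scores with
# subsequent budget growth. All numbers are invented for illustration;
# they are not taken from Kim and Park (2007) or Park (2008).
from statistics import correlation  # available in Python 3.10+

# One entry per evaluated programme: an evaluation score, and the
# percentage budget growth in the following fiscal year (made-up data).
eval_score = [72, 85, 64, 90, 58, 77]
budget_growth = [1.5, 4.2, -0.8, 5.1, -2.3, 2.0]

# Pearson correlation; a value near +1 would suggest that better-rated
# programmes tend to receive larger budget increases.
r = correlation(eval_score, budget_growth)
print(f"Correlation between evaluation results and budget growth: {r:.2f}")
```

Note 23's caveat applies here: a raw correlation of this kind does not, by itself, establish that evaluation results caused the observed budget changes.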
