Research paper

Diagnostics and field experiments

Pages 80-84 | Received 15 Dec 2016, Accepted 06 Oct 2017, Published online: 18 Jun 2021

Highlights

Field experiments have become a popular tool in social science.

Field experiments should do more to focus on structural development problems.

Rodrik’s diagnostic framework can help to improve experimental design.

Abstract

Field experiments have been embraced in development economics and political science as a core method to learn what development interventions work and why. Scientists across the globe actively engage with development practitioners to evaluate projects and programmes. However, even though field experiments have raised the bar on causality, they are often too narrowly defined and lack focus on structural development problems. Researchers and development practitioners should do more to improve the diagnostic process of the problem under study. Rodrik’s (2010) diagnostic framework provides a useful tool to improve the design and relevance of field experiments. Specifically, more should be done to seek coordination across studies, broaden the scope for interdisciplinary collaborations and seek peer review to increase validation and verification of evaluations. Only then can we increase knowledge aggregation and improve development policy making.

1 Introduction

Increasing African incomes remains high on the policy agenda. While over past decades some countries have been able to move up the development ladder, others remain stagnant with high levels of poverty, mortality and food insecurity. Key constraints involve the absence or poor functioning of input and output markets, high transportation costs, and unfavourable biophysical conditions. Inspired by the success of the Green Revolution in Asia, a key development strategy has been to focus on the transfer of knowledge and technology to lift the African continent out of poverty. However, the success of these programs has been limited. For example, while technologies to improve yields and input-output convergence are available at low cost, adoption and diffusion remain limited (Foster and Rosenzweig, 2010). Part of the explanation is that the continent is too diverse and decentralised to respond to top-down and technocratic Green Growth type strategies (Frankema, 2014). Over past decades, researchers and policy makers have increasingly recognized that differences in culture, preferences, policies and institutions matter, placing the understanding of institutional diversity at the center of the research and policy agenda. Schouten et al. (2017) argue that institutional diagnostics is needed to improve African food security.

The role of institutions in development extends beyond debates about how to increase food security and promote growth. In recent years, a consensus has emerged that elevates institutions as a main determinant of economic development (Rodrik et al., 2004). As a result, institutional reform takes a central place in current development planning and policy objectives. Institutions are broadly defined as “the humanly devised constraints that shape human interactions” (North, 1990). The institutional framework encompasses many dimensions, from slow-moving variables related to culture and customs (e.g. Acemoglu et al., 2001; Williamson, 2005) to elements more amenable to change due to various pressures (Austin, 2008) and interventions by external agents. Unfortunately, our understanding of the determinants of institutional change, and of how they relate to development outcomes, is fragmented. This is due in part to key methodological flaws in many studies. Until about two decades ago, researchers in political science and development economics largely relied on econometric models to establish causal claims. Using regression models, researchers sought to include variables that control for potential confounding effects. However, key questions about inference remain, specifically related to endogeneity (how do I know that what I am observing is X causing Y and not Y causing X?) and omitted variable bias (how do I know it is not a third variable, Z, doing the work and causing changes in both X and Y?). Such models are poorly suited to tackling these questions. How do researchers know whether they are controlling for all relevant factors?
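
The omitted variable problem is easy to see in a short simulation. The sketch below is purely illustrative (the data are simulated and the names X, Y and Z simply mirror the hypothetical example above): a confounder Z drives both X and Y, so a regression of Y on X alone suggests a sizeable effect even though the true effect of X is zero.

```python
# Minimal simulation of omitted variable bias (illustrative only; the variable
# names X, Y, Z mirror the hypothetical example in the text, not any real data).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                       # unobserved confounder Z
x = 0.8 * z + rng.normal(size=n)             # "institutions" X, partly driven by Z
y = 0.0 * x + 1.0 * z + rng.normal(size=n)   # outcome Y: X has NO true effect

# Naive regression of Y on X (with intercept) picks up Z's influence.
X_naive = np.column_stack([np.ones(n), x])
beta_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0]

# Controlling for Z recovers the true (zero) effect of X.
X_full = np.column_stack([np.ones(n), x, z])
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

print(f"naive estimate of X's effect:   {beta_naive[1]: .3f}")  # biased away from 0
print(f"estimate controlling for Z:     {beta_full[1]: .3f}")   # close to 0
```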

Critique has been mounting. For a long time, economists mistook “models and arguments that are valid only in specific circumstances for universal remedies” (Rodrik, 2010: 34). The resulting one-size-fits-all policy prescriptions, however, do not do sufficient justice to potential parameter heterogeneity—is the link between institutions and development the same for war-torn Sierra Leone and resource-rich Botswana? In each case the circumstances are different. Put more bluntly, Rodrik’s message is that economists should “stop acting as categorical advocates (or detractors) for specific approaches to development. They should instead be diagnosticians, helping decision makers choose the right model (and remedy) for their specific realities, among many contending models (and remedies)” (2010: 35). Rodrik argues that economists should recognize the importance of context and embrace experimentation.

Increasingly, researchers have moved away from model-based inference to design-based inference, where experimental design—through randomization—ensures that comparisons across units assigned to an intervention and units assigned to a control condition are causal and unbiased. Over the years, the number of studies using experimental methods has increased dramatically. Across the globe, social scientists now work in the field alongside international organizations and governments in developing countries to test policies and programmes. This implies a revolution in the way development planning is organized (see Rodrik, 2008). Researchers and policy makers actively engage to identify knowledge gaps, constraints and possible interventions, and then enter a phase of policy testing to learn what works and why. As a result, a much greater emphasis is placed on diagnostics (Rodrik, 2010). In a recent paper, Esther Duflo, whose work has been seminal in promoting the experimental method in economics, pushes the issue further and sets out why diagnostics are important: “Economists are increasingly getting the opportunity to help governments around the world design new policies and regulations. This gives them a responsibility to get the big picture, or the broad design, right. But in addition, as these designs actually get implemented in the world, this gives them the responsibility to focus on many details about which their models and theories do not give much guidance.” (2017: 1).

Both Duflo and Rodrik call for researchers to become much more serious about how studies are designed and how we learn. However, even though field experiments have raised the bar on causality, they are often too narrowly defined and theory-free. There is much that can be done to improve diagnostics, in turn bettering the design of experiments. Part of the problem may be that too little attention is paid to careful diagnostics of the problem under study at the design stage. I argue that Rodrik’s (2010) diagnostic framework provides a useful tool for improving the design of field experiments and, ultimately, for aggregating knowledge and improving development policy. By focusing on experiments, this paper addresses a methodological question: how to strengthen institutional diagnostics. Of course, sound diagnostics are not limited to experimental studies alone. In that respect, the issues raised below also apply to non-experimental studies. I limit myself here to experiments to allow for more focus and to show where diagnostics could assist in improving the design of field experiments, as well as what we learn from them. The paper proceeds as follows. Below, I introduce what field experiments are and focus on the role of diagnostics therein. By way of illustration, I then describe a recent study on decentralised aid delivery in Sierra Leone, where researchers actively engaged with policy makers to develop a Community Driven Development programme and an evaluation strategy. The paper closes with a set of promising developments that can further improve diagnostics in field experiments and ultimately improve development planning and knowledge aggregation.

2 The role of diagnostics in field experiments

In recent years, experiments have been embraced in development economics and political science as the core method to learn what development interventions work and why. Rodrik celebrates the rise of experiments as holding the promise of being “explicitly diagnostic in its strategy to identify bottlenecks and constraints” (2010: 41). What explains the experimental turn in social science? Primarily, researchers have been motivated by the gain in causal inference that experiments offer. The experimental manipulation of an intervention enables the researcher to identify the impact of that particular intervention. Cox and Reid (2000) define experiments as investigations in which an intervention, in all its essential elements, is under the control of the investigator. Control here has two dimensions. One is control over the intervention (or, in language borrowed from experiments in the medical field, the treatment). Who is being experimented on? What are the rules of participation? Will the intervention be implemented in similar fashion across sites? The second dimension is control over the assignment of an intervention. This forms the line between observational and experimental work, from researching mere correlations to establishing causal effects. The key design question for causal inference is what the world would have looked like without the intervention. This is, of course, impossible to observe. However, this ‘alternative world’ can be created through experimentation. By random assignment of a treatment, researchers create balance across treated and untreated observations on all observed and unobserved variables except the intervention under study.
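
The logic of this ‘alternative world’ can be illustrated with a few lines of simulated data. In the hedged sketch below (hypothetical numbers, not drawn from any study discussed here), a known treatment effect is built into the data, treatment is assigned by lottery, and the simple difference in means recovers that effect while a pre-treatment covariate is balanced across groups.

```python
# Illustrative sketch of why random assignment identifies a causal effect:
# a known true effect is built into simulated data, treatment is assigned by
# lottery, and the difference in means between groups recovers the effect
# while a pre-treatment covariate is balanced. Hypothetical data only.
import numpy as np

rng = np.random.default_rng(42)
n, true_effect = 2_000, 0.5

wealth = rng.normal(size=n)                # pre-treatment covariate
y0 = 1.0 * wealth + rng.normal(size=n)     # potential outcome without treatment
y1 = y0 + true_effect                      # potential outcome with treatment

treat = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)
y_obs = np.where(treat, y1, y0)            # only one potential outcome is observed

print("covariate balance (difference in mean wealth):",
      round(wealth[treat].mean() - wealth[~treat].mean(), 3))
print("difference-in-means estimate of the effect:   ",
      round(y_obs[treat].mean() - y_obs[~treat].mean(), 3))   # close to 0.5
```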

A typical field experiment consists of several phases (see Table 1 for a generic overview).[1] Starting from a motivation, interest or research question, researchers typically seek out a partner (an NGO, government, company) to develop, implement and evaluate an intervention (a program – for example cash transfers for refugees or a training program for farmers). In an ideal case, the research and policy team go through an extensive diagnostic exercise to identify what the key policy outcome is (e.g. increased food security) and what the policy instruments are (e.g. farmer field schools and fertiliser distribution). The way in which a policy produces a change in an outcome is typically (but not always) described in a theory of change. A theory of change is essentially a model of the world which sketches out the causal links between inputs (resources and activities), outputs (direct changes as a result of the intervention) and intermediate and final outcomes (impact of the intervention). A theory of change is the outcome of a process by which researchers and policy makers identify core constraints and trade off various options. Two observations stand out. First (and unfortunately), in many cases the process goes undocumented and there is very little detail available on the theoretical and practical issues which, in the end, produce a particular type of intervention. Second, experiments are often too narrowly defined, lacking focus on structural development problems. Part of the problem is that, at the design stage, too little attention is paid to careful diagnostics of the question under study. Rodrik, too, is critical of some field experiments: “[the evidence that] randomized evaluations typically generate relates to questions that are so narrowly limited in scope and application that they are in themselves uninteresting” (2008: 5). Perhaps this is because at present there is little guidance on how such a theory of change should be developed. This is where Rodrik’s diagnostics framework can be very useful. The diagnostics model developed by Hausmann et al. (2008a) provides concrete footholds on how to go about diagnostics.[2] Rodrik’s framework was developed for growth diagnostics, but as he argues (Rodrik, 2010), the proposed method can readily be applied to the design process of field experiments, incorporating insights from (micro-economic) theory and empirical studies. The researcher–policy maker team first creates a decision tree in which key constraints and strategies to relax these constraints are identified. At each juncture, the team assesses what they would expect to see. Here all relevant insights are brought to the table, based on theory, empirics and sound reasoning. Of course, it is often impossible to fully identify all core constraints; however, it is possible “in practice to reduce a long catalogue of failures to a considerably shorter list of most severe culprits” (Rodrik, 2010: 35) and test those empirically.

Table 1 Stages in a Field Experiment.
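
To make the decision-tree exercise concrete, the following sketch encodes an abbreviated diagnostics tree as a small data structure. The branch labels loosely follow the broad structure of the Hausmann et al. growth-diagnostics tree, but the evidence entries are placeholders that a researcher–policy maker team would replace during its own diagnostic exercise; nothing here is prescribed by the framework itself.

```python
# A minimal sketch of a diagnostics decision tree as a data structure.
# Branches are abbreviated and the "evidence" entries are placeholders the
# research-policy team would fill in during the diagnostic exercise.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Node:
    constraint: str                                      # candidate binding constraint
    evidence: list[str] = field(default_factory=list)    # what we would expect to see
    children: list[Node] = field(default_factory=list)


tree = Node(
    "Low private investment and entrepreneurship",
    children=[
        Node("Low returns to economic activity", children=[
            Node("Low social returns",
                 evidence=["poor infrastructure", "low human capital"]),
            Node("Low appropriability",
                 evidence=["weak property rights", "high taxation"]),
        ]),
        Node("High cost of finance",
             evidence=["low savings", "shallow local credit markets"]),
    ],
)


def walk(node: Node, depth: int = 0) -> None:
    """Print the tree so the team can review constraints and expected evidence."""
    tag = f"  [expect: {', '.join(node.evidence)}]" if node.evidence else ""
    print("  " * depth + f"- {node.constraint}" + tag)
    for child in node.children:
        walk(child, depth + 1)


walk(tree)
```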

Several critical elements need to be considered in experimental design, for example at what level of aggregation the intervention (such as a community development program) should be implemented and at what level measurement should take place (e.g. measuring changes in community cohesion or in individual-level attitudes). Often the team agrees on a division of responsibilities in which the researchers manage and implement the measurement strategy and the development partner implements the intervention. Prior to implementation, researchers collect baseline measures of key variables, and units are then randomized to an intervention (or not). Randomization is done through a (public or computer-based) lottery, usually within several strata (or blocks) to help create balance, after which the intervention is implemented. Subsequently, one or more rounds of outcome measures are gathered. Due to randomization, the data analysis becomes very straightforward, consisting of simple difference-in-means tests (although sometimes the design may require a more sophisticated test to account for clustering, noncompliance or lack of balance). Below I highlight a recent field experiment around the decentralization of development aid, zooming in on the diagnostics process.
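
To make the assignment step concrete, here is a minimal sketch of a blocked lottery. The district names and sample sizes are invented for illustration; in practice teams also document the random seed and procedure so that the lottery can be audited.

```python
# A minimal sketch of a blocked (stratified) randomization lottery of the kind
# described above: units are assigned to treatment or control separately within
# each stratum, so the groups are balanced on the stratifying variable by
# construction. Strata names and sample sizes are hypothetical.
import random

random.seed(7)

units = [{"id": i, "district": d}
         for i, d in enumerate(["North"] * 60 + ["South"] * 58)]

strata = {}
for u in units:
    strata.setdefault(u["district"], []).append(u)

assignment = {}
for district, members in strata.items():
    random.shuffle(members)                  # the lottery within each stratum
    half = len(members) // 2
    for u in members[:half]:
        assignment[u["id"]] = "treatment"
    for u in members[half:]:
        assignment[u["id"]] = "control"

for district, members in strata.items():
    n_treat = sum(assignment[u["id"]] == "treatment" for u in members)
    print(f"{district}: {n_treat} treated of {len(members)} communities")
```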

3 Diagnostics and field experiments on community driven development

Driven to a large degree by World Bank funding during the early 2000s, the new development model came to put “poor people at the center of service provision: by enabling them to monitor and discipline service providers, by amplifying their voice in policy-making, and by strengthening the incentives for providers to serve the poor” (World Bank, 2004). Over the past decade, the World Bank has provided over $85 billion for such Community Driven Development (CDD) programs (Mansuri and Rao, 2012). To achieve this aim, CDD initiatives encourage local responsibility for service delivery or resource management, as well as efforts to decentralize authority and resources to local institutions, while at the same time improving the representativeness, inclusiveness, accountability and effectiveness of those institutions. CDD programs typically comprise two components: (i) improving the stock and quality of local public goods via the provision of block grants, and (ii) democratizing local decision-making via intensive social facilitation focused on the participation of marginalized groups. The size of the block grants, the intensity of facilitation and the implementation details vary across the contexts in which CDD has been implemented. Several field experiments have explored the impact of CDD on a range of economic and social indicators (see Casey, 2017 for an overview). Most studies report similar results: CDD is an effective development tool in that it produces public goods and modest economic effects, but it achieves little in terms of improved local governance, social cohesion or welfare. It appears that changing local institutions, or creating parallel institutions for the implementation of development aid projects, is challenging.

One such study, by Casey et al. (2012), evaluated the short-run effects of a CDD program implemented by the Decentralisation Secretariat of the Government of Sierra Leone. The program, called “GoBifo” (“Move Forward” in Krio, the local lingua franca), was set up as a field experiment and involved 118 communities randomly selected from 236 communities in two districts in Sierra Leone. The program was implemented between 2005 and 2009 and involved a financial block grant for community public goods or small enterprise development worth $4667 (about $100 per household). In addition, communities received training and social facilitation with the aim of building durable local collective action capacity. Before the launch of the program in 2005, the research team met extensively with the GoBifo policy team to discuss the program and adapt the CDD approach to local circumstances, focusing on the size of the block grant (about $5 per capita per year), the duration and intensity of the social facilitation (six months over four years) and the particular institutional structure of the Village Development Committees (involving a quota for women and youth). This diagnostic process drew on locally available knowledge, experience from CDD programs in other contexts, insights from other studies on Sierra Leone, and so on. As a second step, the team jointly developed a list of hypotheses, specifying groups of outcomes they expected the CDD program to change. In total, there are twelve hypotheses, grouped around direct program effectiveness impacts related to implementation, the availability of public goods and economic impacts (H1–H3) and more transformative impacts such as inclusiveness, representation and democratization (H4–H12), see Table 2.

Table 2 GoBifo CDD Hypotheses.

Each hypothesis has a set of outcome indicators that capture change in the overall grouping. Unfortunately, the process by which the team developed the GoBifo intervention and arrived at these hypotheses is not fully described. However, the researchers did take a promising step in that direction by producing a detailed document, called a Pre-Analysis Plan (PAP), which set out the hypotheses and outcomes. This represents a major break from the past, as it was the first study (in economics) that detailed what the research and policy team was expecting before any results were available, thus separating diagnostics from testing and interpretation. This ‘fixing of ideas’ is about being both transparent and effective. In partnerships where multiple parties are hoping to learn from a study, it can be very helpful to specify the changes in outcomes researchers and policy makers expect to see. Moreover, it can prevent awkward discussions later when, for example, a policy maker would like to cherry-pick positive findings about program impact. PAPs avoid (the accusation of) data mining by researchers by clearly separating what researchers expect their experiment to show from patterns in the data, some of which are there simply due to chance. As such, PAPs should be considered an integral part of how diagnostics are performed and, essentially, of how we learn. For an elaborate discussion of the use of PAPs and their benefits, see Humphreys et al. (2013) and Miguel et al. (2014), and references therein.
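
One common way to handle such families of indicators in experimental evaluations is to aggregate them into a standardized mean-effects index and test the treatment effect on the index rather than on each indicator separately. The sketch below uses simulated data and is a generic illustration of that idea, not the exact specification laid out in the GoBifo pre-analysis plan.

```python
# Hedged sketch: aggregating a family of outcome indicators under one hypothesis
# into a single standardized index before testing. Generic illustration with
# simulated data, not the GoBifo specification.
import numpy as np

rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, size=n).astype(bool)

# Three hypothetical indicators for one hypothesis family (e.g. public goods).
outcomes = np.column_stack([
    rng.normal(0.20 * treat, 1.0),
    rng.normal(0.10 * treat, 1.0),
    rng.normal(0.15 * treat, 1.0),
])

# Standardize each indicator by the control-group mean and standard deviation,
# then average across indicators to form the mean-effects index.
mu = outcomes[~treat].mean(axis=0)
sd = outcomes[~treat].std(axis=0, ddof=1)
index = ((outcomes - mu) / sd).mean(axis=1)

effect = index[treat].mean() - index[~treat].mean()
print(f"treatment effect on the standardized index: {effect:.3f} (control SDs)")
```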

After the program had run for four years, the research team went back to both “GoBifo” and non-intervention villages and compared outcomes. They found substantial positive impacts on local public goods and economic activity (H1–H3), but no evidence of more inclusive local decision-making (H4–H12; see Casey et al., 2012). The evaluation report was well received and led to a discussion on whether CDD should be rethought.[3] The study has faced criticism for having assessed institutional changes over too limited a time span. Assessing institutional change over a mere couple of years is considered to be at the lower end of the scales of institutional change (see Williamson, 2005). In addition, institutional change may follow non-linear trajectories (Woolcock, 2013). Thus, especially for interventions that aim to reshape institutions, a longer-term focus is warranted. Recently, I joined the research team in revisiting each community involved in the study, allowing us to assess changes over an 11-year period (recall that the program started in 2005). Our key questions focus on whether or not the positive economic and public goods impacts persist, which would be an important message in an area with weak state capacity. If CDD is an effective aid delivery mechanism, then finding long-run effects would imply a different cost-benefit calculation. In addition, the longer-term follow-up allows us to assess impact trajectories over multiple survey rounds and to look at processes that are shaped by slower and nonlinear dynamics. Alternatively, confirming a null result on the institutional outcomes is also informative and may lead to redesigned CDD programs.

To design our follow-up study, we organised a series of discussions within both Sierra Leone and the World Bank to elicit their views and expectations. This feedback process helped us to sharpen research tools and to engage with policy makers on the topic before presenting results. In addition, a central part of successful diagnostics is bringing on board existing evidence in order to identify and rank binding constraints and to suggest possible policies. Yet, how well do we anticipate impacts, and, in both academia and policy, do people update their beliefs about what works and by how much? More generally, does the accumulation of evidence change the allocation of donor funds? As Rodrik notes, “[p]olicy learning is all about updating one’s priors” (2010: 42). Taking this advice to heart, in the long-run CDD evaluation we invited experts and non-experts to complete a short survey to see what they thought we would find. Here we follow groundbreaking work by DellaVigna and Pope (2016), who elicited expert forecasts of the results of laboratory experiments. Specifically, we invited respondents from three distinct groups: (i) policy makers working on CDD programs for multilateral aid agencies and policy makers within Sierra Leone; (ii) economics students; and (iii) researchers who have been directly involved in evaluating CDD projects. These data were then used to evaluate the accuracy of their forecasts, to compare accuracy across academic and policy experts, and to evaluate whether providing experts with new information leads them to update their prior beliefs.
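
As an illustration of how such forecasts can be scored, the sketch below compares the mean absolute error of forecasts against a realized effect for each respondent group. All numbers are invented placeholders, not actual survey responses or GoBifo estimates.

```python
# Hedged sketch of comparing forecast accuracy across respondent groups: each
# respondent forecasts a treatment effect, and the mean absolute error against
# the realized estimate summarizes accuracy by group. All numbers are made up.
import statistics

realized_effect = 0.12          # hypothetical estimated treatment effect

forecasts = {
    "policy makers": [0.30, 0.25, 0.18, 0.22],
    "students":      [0.05, 0.40, 0.10, 0.28],
    "researchers":   [0.15, 0.10, 0.08, 0.20],
}

for group, values in forecasts.items():
    mae = statistics.mean(abs(v - realized_effect) for v in values)
    print(f"{group:>13}: mean absolute forecast error = {mae:.3f}")
```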

4 How can we improve field experiments to increase cumulative learning?

More time and resources need to be spent on the design of field experiments before experimental testing takes place. There are several important steps that can be taken to increase coordination, improve design and, ultimately, enhance learning across studies.[4]

4.1 Coordination across studies

Donors and academics should aim to work on research programs rather than individual studies. As such, greater emphasis can be placed on what we learn across contexts with similar program elements. One promising avenue is the Metaketa initiative of the Evidence in Governance and Politics (EGAP) group (http://egap.org/metaketa). Supported by DfID, EGAP is implementing several rounds of funding in which research teams apply to work in parallel on a predetermined theme, such as the role of information campaigns in explaining political accountability, or taxation and resource management. Research teams come together to agree on a common set of interventions that will be implemented across several locations and contexts (for example, the first wave involved studies in Benin, Brazil, Burkina Faso, India, Mexico and Uganda). In addition, researchers agree on a set of consistent outcome measures. Within each study, a research team additionally implements an orthogonal intervention specific to the research setting.
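
Because the outcome measures are harmonized, estimates from such coordinated studies can also be combined formally. The sketch below shows a simple fixed-effect (inverse-variance weighted) pooling of hypothetical country-level estimates; the numbers are placeholders rather than Metaketa results, and a real synthesis would also consider random-effects models and cross-study heterogeneity.

```python
# Minimal sketch of pooling harmonized treatment-effect estimates from several
# coordinated studies with a fixed-effect (inverse-variance weighted) average.
# Country labels, estimates and standard errors are hypothetical placeholders.
import math

estimates = {          # country: (estimated effect, standard error)
    "Benin":        (0.10, 0.06),
    "Brazil":       (0.02, 0.05),
    "Burkina Faso": (0.07, 0.08),
    "India":        (0.04, 0.04),
    "Mexico":       (0.01, 0.05),
    "Uganda":       (0.06, 0.07),
}

weights = {c: 1.0 / se ** 2 for c, (_, se) in estimates.items()}
pooled = sum(weights[c] * est for c, (est, _) in estimates.items()) / sum(weights.values())
pooled_se = math.sqrt(1.0 / sum(weights.values()))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```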

4.2 Interdisciplinary work

There is great potential for compatibility between field experimentation and various types of qualitative measurement tools and research questions. Ideally, a research team is multidisciplinary, comprising expertise from economics, political science, anthropology, and so on. In the diagnostic process, input from across the disciplines is essential; both formal theorising and insights emerging from qualitative investigations are needed for sound diagnostics. Such input helps prioritize which constraints are most binding, which interventions may work, and what researchers could expect in terms of possible programme effects. Typically, a mixed-methods study relies on surveys in large samples to estimate causal effects and then asks the qualitative researcher to examine the process of change. However, the integration of methods is often ad hoc, with little clarity or agreement on how and when it should ideally be organized. Other trade-offs matter too. How should data integration and aggregation take place? What is the guiding process for knowledge aggregation, and how should conflicting conclusions be dealt with?

4.3 Validation, verification, and pre-registration

Seeking peer review at each stage of research is essential to good scientific practice. More and more research seminars are devoted to design presentations rather than results. In addition, more attention is being paid to increasing transparency. As Rodrik notes: “economists are subject to the same cognitive biases as others: overconfidence, tendency to join the herd, and proclivity to overlook contradictory evidence” (2010: 40). Creating a reproducible workflow is key to the scientific process. Others should be able to review and inspect the choices made during the research process, from start to finish (see Gentzkow and Shapiro, 2014 and Bowers and Voors, 2016, amongst others). In addition, researchers are beginning to embrace pre-registration, whereby they publish details about their study design and analysis online before conducting the analysis. Several organisations (ASSA, EGAP and 3IE amongst others) offer repositories for such pre-analysis plans. This has the potential to greatly increase transparency (and may eventually reduce publication bias).

5 Conclusion

Field experiments have become a powerful new tool in the methods toolbox of social scientists, representing a huge step forward in social science towards credibly identifying causal effects. However, field experiments are often too narrowly defined and do not focus on the structural problems impeding local development. Part of the problem is that, at the design stage, too little attention is paid to careful diagnostics of the question under study. Rodrik’s (2010) diagnostic framework provides a useful tool for improving the design of field experiments and, ultimately, for aggregating knowledge and improving development policy. Rodrik’s message is that we should act as diagnosticians. Researchers and development practitioners should both place greater emphasis on the process of correctly implementing diagnostics, thus improving the capacity of studies to allow for learning (and ultimately policy improvements). Critical design issues include seeking greater coordination across studies, broadening the scope for interdisciplinary collaborations, including longer-term horizons and increasing the role of validation, verification and pre-registration. Without these measures, the payoff to knowledge gleaned from experiments is substantially lessened. Thankfully, there is reason to be optimistic—in more and more studies, the evaluation process is improving, becoming more transparent and credible.

Acknowledgements

Support from the Netherlands Organisation for Scientific Research [N.W.O. grant #451-14-001] is gratefully acknowledged. Many thanks to the special issue editors, two anonymous referees, Guy Peters and NJAS workshop participants (2016) for useful feedback. I declare no competing interests. The funders had no role in the study design, data collection, analysis, interpretation, or writing.

Notes

1 For references on the ‘how to’ of field experiments, see Glennerster and Takavarasha (2013), Gerber and Green (2012), Dunning (2012) and Gertler et al. (2011), amongst others.

2 See also follow-up work by Hausmann et al. (2008b), who develop a practical tool to guide this process.

3 Around the same time, another CDD program evaluation came out with similar conclusions; see Humphreys et al. (2017).

4 See Dunning (2016) for an excellent review arguing for the need for increased transparency, replication and cumulative learning, views that are reflected in this section.

References

  • D. Acemoglu, S. Johnson, J. Robinson. The colonial origins of comparative development: an empirical investigation. Am. Econ. Rev. 91 (5), 2001, 1369–1401.
  • G. Austin. The ‘reversal of fortune’ thesis and the compression of history: perspectives from African and comparative economic history. J. Int. Dev. 20 (8), 2008, 996–1027.
  • J. Bowers, M. Voors. How to improve your relationship with your future self. Revista de Ciencia Política 36, 2016, 829–848.
  • K. Casey, R. Glennerster, E. Miguel. Reshaping institutions: evidence on aid impacts using a pre-analysis plan. Q. J. Econ. 127 (4), 2012, 1755–1812.
  • K. Casey. Community Driven Development. Working paper.
  • D. Cox, N. Reid. The Theory of the Design of Experiments. Monographs on Statistics and Applied Probability 86. 2000, Chapman & Hall/CRC, Boca Raton, FL.
  • S. DellaVigna, D. Pope. What Motivates Effort? Evidence and Expert Forecasts. NBER Working Paper 22193, 2016.
  • T. Dunning. Natural Experiments in the Social Sciences: A Design-Based Approach. 2012, Cambridge University Press, New York.
  • T. Dunning. Transparency, replication, and cumulative learning: what experiments alone cannot achieve. Annu. Rev. Polit. Sci. 19, 2016, S1–S23.
  • EGAP. EGAP Learning Days Training Materials. Egap.org, 2017.
  • A.D. Foster, M.R. Rosenzweig. Microeconomics of technology adoption. Annu. Rev. Econ. 2, 2010, 395–424.
  • E. Frankema. Africa and the green revolution: a global historical perspective. NJAS Wagening. J. Life Sci. 70, 2014, 17–24.
  • M. Gentzkow, J.M. Shapiro. Code and Data for the Social Sciences: A Practitioner’s Guide. 2014, University of Chicago mimeo. http://faculty.chicagobooth.edu/matthew.gentzkow/research/CodeAndData.pdf (last updated January 2014).
  • A.S. Gerber, D.P. Green. Field Experiments: Design, Analysis, and Interpretation. 2012, W.W. Norton, New York.
  • P.J. Gertler, S. Martinez, P. Premand, L. Rawlings, C. Vermeersch. Impact Evaluation in Practice. 2011, World Bank.
  • R. Glennerster, K. Takavarasha. Running Randomized Evaluations: A Practical Guide. 2013, Princeton University Press.
  • R. Hausmann, D. Rodrik, A. Velasco. Growth diagnostics. In: J. Stiglitz, N. Serra (Eds.), The Washington Consensus Reconsidered: Towards a New Global Governance, Chap. 15. 2008, Oxford University Press, New York.
  • R. Hausmann, B. Klinger, R. Wagner. Doing Growth Diagnostics in Practice: A ‘Mindbook’. Harvard University Center for International Development Working Paper 177, September 2008.
  • M. Humphreys, R. Sanchez de la Sierra, P. van der Windt. Fishing, commitment, and communication: a proposal for comprehensive nonbinding research registration. Polit. Anal. 21 (1), 2013, 1–20.
  • M. Humphreys, R. Sanchez de la Sierra, P. van der Windt. Social Engineering in the Tropics: Evidence from a Field Experiment in Eastern Congo. Working Paper, 2017, Columbia University, New York.
  • G. Mansuri, V. Rao. Localizing Development: Does Participation Work? World Bank Policy Research Report, 2012.
  • E. Miguel, C. Camerer, K. Casey, J. Cohen, K.M. Esterling, A. Gerber, R. Glennerster, D.P. Green, M. Humphreys, G. Imbens, D. Laitin, T. Madon, L. Nelson, B.A. Nosek, M. Petersen, R. Sedlmayr, J.P. Simmons, U. Simonsohn, M. van der Laan. Promoting transparency in social science research. Science 343 (6166), 2014, 30–31.
  • D. North. Institutions, Institutional Change and Economic Performance. 1990, Cambridge University Press, Cambridge.
  • D. Rodrik, A. Subramanian, F. Trebbi. Institutions rule: the primacy of institutions over geography and integration in economic development. J. Econ. Growth 9 (2), 2004, 131–165.
  • D. Rodrik. The New Development Economics: We Shall Experiment, but How Shall We Learn? HKS Faculty Research Working Paper Series RWP 08-055, 2008.
  • D. Rodrik. Diagnostics before prescription. J. Econ. Perspect. 24 (3), 2010, 33–44.
  • G. Schouten, M. Vink, S. Vellema. Institutional diagnostics for African food security: approaches, methods and implications. NJAS Wagening. J. Life Sci. 2017.
  • O. Williamson. The economics of governance. Am. Econ. Rev. Papers Proc. 95, 2005, 1–18.
  • M. Woolcock. Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation 19, 2013, 229–248.
