Original Articles

The benefits and challenges of using systematic reviews in international development research

Pages 445-455 | Published online: 18 Sep 2012

Abstract

Although first applied in the medical sciences in the 1970s, systematic reviews have been recently, and increasingly, used in the field of international development to examine the impacts of a range of development and humanitarian interventions. However, to date, there has been only limited critical reflection on their application within this field. Drawing on the authors' first-hand experiences of conducting eight systematic reviews, this article reflects upon the use of systematic reviews in international development research. It is concluded that although using systematic review principles can help researchers improve the rigour and breadth of literature reviews, conducting a full systematic review is a resource-intensive process which involves a number of practical challenges. Further, it raises a series of fundamental concerns for those working in international development, as well as the social sciences more broadly. Ultimately, systematic reviews should be viewed as a means to finding a robust and sensible answer to a focused research question, but not as an end in themselves.

1. Introduction

The question of ‘what works’ in international development policy and practice is becoming ever more important against a backdrop of accountability and austerity. Donors are under increasing pressure to adopt spending practices that not only generate positive development and humanitarian outcomes, but also represent value for money. As part of this drive towards achieving greater (cost) effectiveness, there has been a surge of interest in ‘evidence-informed policymaking’ – the careful use of empirical evidence in the design and implementation of externally funded policies and programmes in developing countries (DFID 2011) – and an associated rise in the use of systematic reviews in development research.

Systematic reviews are a rigorous and transparent form of literature review. Described by Petrosino et al. (cited in van der Knaap et al. 2008, p. 49) as ‘the most reliable and comprehensive statement about what works’, systematic reviews involve identifying, synthesising and assessing all available evidence, quantitative and/or qualitative, in order to generate a robust, empirically derived answer to a focused research question. Originally used in the medical sciences in the 1970s to examine the effectiveness of health-care interventions and, more broadly, to support the practice of evidence-based medicine, systematic reviews have since permeated into a wide range of disciplinary fields, from ‘astronomy to zoology’ (Petticrew 2001). International development is arguably the latest field to have been introduced to systematic reviews. As systematic reviews are increasingly considered a key tool for evidence-informed policymaking, a number of donors – most notably the UK Department for International Development (DFID) and the Australian Agency for International Development (AusAID) – are focusing attention and resources on testing their appropriateness for assessing the impacts of development and humanitarian interventions. Since 2010, DFID, AusAID and the International Initiative for Impact Evaluation (3ie) have commissioned close to 100 systematic reviews in international development. However, despite growing interest in the use of systematic reviews, to date there has been very little evaluation of the appropriateness of this methodology for the development field.

This article aims to help address this gap by offering critical reflections on the use of systematic reviews within international development research. It draws on the authors' shared experience of conducting eight systematic reviews¹ since 2010 on the respective impacts of:

microfinance programmes (Duvendack et al. 2011)

cash transfers and employment guarantee schemes (Hagen-Zanker et al. 2011)

employment creation programmes (Holmes et al. 2012)

‘Markets for the Poor’ (M4P) programmes (SLRC 2012)

school feeding programmes (SLRC 2012)

seeds-and-tools interventions (SLRC 2012)

social funds (SLRC 2012)

water committees (SLRC 2012)

The article identifies where a systematic review approach adds value to development research and where it may become problematic. Although six of the reviews focus specifically on fragile and conflict-affected situations, the findings are valid across a broader development context.

The next section discusses the systematic review methodology in more detail and outlines how it was applied in our eight systematic reviews. Section 3 considers how applying systematic review principles can improve standard literature reviews. Section 4 discusses the specific practical challenges the authors faced, before Section 5 raises some more fundamental concerns regarding the use of systematic reviews in international development. The final section concludes, lists specific policy conclusions and suggests a way forward for using systematic reviews in development research.

2. Background

Used widely in medical research and the natural sciences since the 1970s and early 1980s, systematic reviews are considered a ‘rigorous method to map the evidence base in an [as] unbiased way as possible, and to assess the quality of the evidence and synthesize it’ (DFID 2011). Systematic reviews rely upon the use of an objective, transparent and rigorous approach for the entire research process in order to minimise bias and ensure future replicability. Rigour, transparency and replicability are achieved by following a fixed process for all reviews. This fixed process is one of the characteristics that distinguish systematic reviews from traditional literature reviews.

Systematic reviews usually include the following steps: first, the research question is deconstructed by considering population, intervention, outcome and comparator. These form the basis of search strings that are used in the literature search. Then a protocol is produced that describes definitions, search strings, search strategy, inclusion and exclusion criteria and approach to synthesis. This protocol is often peer-reviewed and piloted. This may lead to a revision of the search strategy. Next the systematic search is conducted. Studies are retrieved from academic databases and institutional websites (hand-searching). At this stage, all studies that are found are included. In the next stages, however, all retrieved studies are screened on relevance of title, abstract and full text, by using predefined inclusion and exclusion criteria. Screening is often done by multiple researchers due to the sheer number of studies to be screened. To ensure consistent screening, the screening process is sometimes piloted with all researchers screening the same studies and then comparing the results. Once screening has been completed, the studies that are included in the final analysis are often characterised by intervention, study quality, outcomes, research design and type of analysis. The final stage involves the extraction of relevant quantitative and/or qualitative data, in order to synthesise the evidence. Depending on the field, a meta-analysis is often used to combine and directly compare quantitative results.
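The deconstruction and search-string steps described above can be sketched in code. This is purely an illustrative sketch: the term lists below are hypothetical, not those used in any of our reviews, and each real database has its own query syntax.

```python
# Illustrative sketch of the protocol stage: composing a boolean search
# string from the deconstructed research question (population,
# intervention, outcome). All term lists are hypothetical examples.

population = ['"developing countries"', '"low-income countries"']
intervention = ['"cash transfer*"', '"employment guarantee"']
outcome = ['poverty', '"household income"']

def or_block(synonyms):
    # Synonyms for one element of the question are joined with OR.
    return "(" + " OR ".join(synonyms) + ")"

def build_search_string(*elements):
    # The OR-blocks for the different elements are then joined with AND.
    return " AND ".join(or_block(e) for e in elements)

query = build_search_string(population, intervention, outcome)
print(query)
```

A string of this form would then be adapted to each database's syntax during piloting and recorded in the protocol, so that the search can later be replicated.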

Systematic reviews are usually peer-reviewed at different stages in the process. Furthermore, they are often registered with systematic review research networks, for instance, the Cochrane Collaboration in medicine or the Campbell Collaboration for reviews in education, crime and justice, and social welfare. The objectives of registering reviews are to minimise review bias, to reduce duplication of effort between groups, to keep systematic reviews updated (PLoS Medicine Editors 2011) and to provide a library of all systematic reviews in the field.

While most systematic reviews apply the steps described above in a fixed and rigid fashion, some of the systematic reviews referred to in this article adopted a more flexible approach: they continued to comply with the core principles of systematic review methodology (rigour, transparency and replicability), while tailoring the protocol as and when required, including beyond the piloting stage. This reflects the fact that systematic reviews do not, in fact, constitute a homogeneous approach: there are different ‘levels’ of systematic review (see, for example, the work by the Matrix Knowledge Group²). Researchers working in various disciplines have previously attempted to make systematic reviews more useful by combining them with other methodological approaches (for example, van der Knaap et al. 2008). We will suggest a way forward for how to use systematic reviews in development research in Section 6.

3. How systematic review principles can improve literature reviews

Our shared experience of conducting systematic reviews suggests that adhering to core systematic review principles – rigour, transparency and replicability – can improve the quality and strength of traditional literature reviews in a number of ways: first, by increasing breadth while retaining focus; second, by focusing on empirical evidence rather than preconceived knowledge; and third, by being transparent and replicable. These will now be discussed in turn.

Traditional literature reviews are all too often restricted to literature already known to the authors, or literature that is found by conducting little more than cursory searches. This means that the same studies are frequently cited and this introduces a persistent bias to literature reviews. Systematic reviews help reduce implicit researcher bias. Through the adoption of broad search strategies, predefined search strings and uniform inclusion and exclusion criteria, systematic reviews effectively force researchers to search for studies beyond their own subject areas and networks. At the same time, the careful deconstruction of the research question at the outset in terms of population, intervention, comparator and outcome ensures that the review process remains tightly focused. In theory, this improves the likelihood of generating a clearer, more objective answer to the research question.

Likewise, traditional literature reviews in international development research often focus exclusively on results of other studies, without considering study design, data and analytical methods used. In comparison, systematic reviews focus more strongly on evidence, impact, validity and causality. By extracting information on research design (sampling strategy and data collection methods), analytical methods and causal chains, systematic reviews are effective at gauging the robustness of evidence. Classifying the quality and characteristics of impact studies against standardised criteria also enables the possibility of producing cross-study comparisons and meta-analyses, which are valuable for evidence-informed policymaking. In other words, systematic reviews encourage researchers to engage with studies more critically and to be consistent in prioritising empirical evidence over preconceived knowledge.

Finally, the use of a clear systematic review protocol is effective not only in guiding researchers throughout the process – keeping them ‘on track’ – but also in improving the methodological transparency of the review (Gough and Elbourne 2002) and in enabling future replication. This is particularly the case if systematic reviews are registered with international research networks, as discussed earlier. Peer review of the protocol and process ensures a further reduction of researcher bias. Furthermore, systematic reviews are able to produce a relatively objective baseline against which future research and evidence on certain interventions can be assessed. This might prove particularly useful for ‘measuring’ the knowledge contribution of a research programme over a number of years.

When systematic review principles are applied sensitively, systematic reviews have a clear advantage over traditional literature reviews. Quality of reviews is improved through transparency, greater breadth of studies included, greater objectivity and reduction of implicit researcher bias, and by encouraging researchers to engage more critically with the quality of evidence. However, systematic reviews are difficult to apply in practice, and entail a number of practical challenges. These are discussed next.

4. Practical challenges when conducting systematic reviews

Despite the added value of a systematic review approach, we encountered a number of practical problems at the searching, screening and synthesis stages; these will now be discussed in turn. Concrete examples are provided from the systematic review on cash transfers and employment creation (Hagen-Zanker et al. 2011).

First of all, systematic reviews require access to a wide range of databases and peer-reviewed journals, which can be problematic and very expensive for non-academic researchers and those based in southern research organisations.³ Promoting systematic reviews as best practice, therefore, sits uneasily alongside donors' interests in developing southern research capacity and in encouraging a more inclusive process of evidence building.

Searching institutional websites, for example those of international organisations, is essential to ensure breadth of systematic reviews, as relevant research is often located outside the formal peer-reviewed channels. For example, of the nine studies included in the social funds systematic review (SLRC 2012), just two were retrieved from academic journals. However, searching institutional websites undermines the objectivity of the search and retrieval process and introduces bias to the review process. This happens for a number of reasons: differences in websites' search functions mean that search strings have to be either adapted or discarded altogether; and relevant websites may be excluded, whether unintentionally (lack of knowledge) or otherwise (time/resource constraints). This means that potentially high numbers of pertinent studies can be missed.

In order to achieve objectivity, inclusion and exclusion criteria are used to screen potentially relevant studies. However, there is inevitable subjectivity in the screening process, particularly when high numbers of researchers are involved, as each member of the research team interprets inclusion criteria slightly differently. For the systematic review on cash transfers (Hagen-Zanker et al. 2011), four different researchers were working on the screening at different stages of the process. To minimise the risk of inconsistent screening, the authors piloted the screening process. For a random selection of 100 studies, the initial disagreement rate in terms of inclusion/exclusion of studies between the two main researchers was 18 per cent, which was reduced to less than 10 per cent after extensive discussions within the team. The authors had similar experiences when conducting other systematic reviews. However, while piloting enables researchers to screen more consistently, there will always be some degree of subjectivity.
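The screening-consistency check described above can be quantified straightforwardly. The sketch below uses hypothetical include/exclude decisions for two reviewers over 100 studies, constructed so that the disagreement rate is 18 per cent, matching the figure reported above; Cohen's kappa is a standard chance-corrected complement to the raw disagreement rate.

```python
# Illustrative sketch of a screening pilot: two reviewers screen the same
# batch of 100 studies and we measure how often they disagree. The
# decision vectors are hypothetical, constructed for illustration only.

reviewer_a = ["include" if i < 30 else "exclude" for i in range(100)]
reviewer_b = ["include" if i < 24 or 30 <= i < 42 else "exclude"
              for i in range(100)]

def disagreement_rate(a, b):
    # Share of studies on which the two screeners reached different decisions.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # Chance-corrected agreement for two binary screening decisions.
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_a = a.count("include") / n
    p_b = b.count("include") / n
    p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_obs - p_exp) / (1 - p_exp)

print(disagreement_rate(reviewer_a, reviewer_b))  # 0.18
```

Piloting would repeat this calculation after each round of discussion, continuing until the disagreement rate (or kappa) reaches an acceptable threshold agreed in the protocol.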

In our systematic reviews, we classified all studies included in the final analysis according to research design, methodology, data and assumptions made. However, data and methodology are, in general, poorly described in the development studies literature.

Furthermore, due to time and resource constraints, we had to rely on authors' self-proclaimed research design and results, which introduces another source of bias. In principle, systematic reviews should be backed up with correspondence with the authors of the included studies and subsequent replication and/or reproduction of their results, but this is often not feasible due to resource constraints. Duvendack et al. (2011) highlight the need for replication or reproduction, as even results from papers published in top-rank peer-reviewed journals may not be reliable. The authors suggest that the availability of raw data to enable replication, reproduction and repetition in other locations is desirable in assessing the quality of studies. However, we suspect that many authors would not be enthusiastic about detailed questioning of their work (see Duvendack (2010) and Duvendack and Palmer-Jones (2011) for their replication experience).

Meta-analysis is rarely possible in the international development field because of the non-availability of data as well as methodological diversity. For example, in Hagen-Zanker et al. (2011), even restricting studies to money-metric measures of poverty still left too much variation in terms of methodology and indicator used. The choice of indicator clearly affected the impacts identified, and comparisons across studies using different indicators were not meaningful. The range and inconsistency of methodological approaches adopted make it difficult to draw meaningful conclusions. Furthermore, complex interventions tend to generate multiple outcomes which systematic reviews may not be able to capture (Boaz et al. 2002).

Information on statistical significance – a basic but important indicator in quantitative impact evaluation – is missing from many studies in the development field. For example, in the Hagen-Zanker et al. (2011) review, only 16 out of 37 studies included information on statistical significance. This renders meta-analysis or other robust forms of synthesis unfeasible.
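To see why effect sizes and their precision (recoverable from significance statistics) are indispensable for synthesis, consider a minimal fixed-effect, inverse-variance meta-analysis. The study names, effect sizes and standard errors below are entirely hypothetical.

```python
import math

# Hypothetical per-study effect sizes and standard errors. A study that
# reports neither a standard error nor a significance test cannot be
# weighted, and so cannot enter the pooled estimate at all.
studies = [
    ("study_a", 0.20, 0.05),
    ("study_b", 0.35, 0.10),
    ("study_c", 0.10, 0.08),
]

def fixed_effect_pool(studies):
    # Each study is weighted by the inverse of its variance (1 / se^2),
    # so more precise studies contribute more to the pooled effect.
    weights = [1 / se ** 2 for _, _, se in studies]
    pooled = sum(w * effect
                 for (_, effect, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

pooled, pooled_se = fixed_effect_pool(studies)
```

This also illustrates the comparability problem discussed above: pooling of this kind is only meaningful when all effect sizes measure the same outcome on the same scale.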

However, a meta-analysis is not impossible in the international development field. A forthcoming systematic review on microcredit impact and women's empowerment by Vaessen et al. (forthcoming) has managed to conduct a meta-analysis including studies that are conceptually and methodologically diverse. The challenges and limitations of doing this are discussed in Duvendack et al. (2012).

Finally, our systematic reviews did not generate the practical policy recommendations anticipated. Due to the often low number of studies, inconsistency of methodological approaches and lack of meta-analysis, the findings were often too broad, too incomparable and too research-oriented. For example, in the systematic review on cash transfers and employment guarantee schemes (Hagen-Zanker et al. 2011), the main conclusions were about the lack of methodological consistency, the quality of the studies reviewed and research gaps; we could not draw any conclusions on the relative efficacy of the two interventions. Similarly, Duvendack et al. (2011) conclude that the majority of the microfinance impact evaluations they examined suffer from weak methodologies, and thus recommend better research to get a clearer picture of how and for whom microfinance interventions actually work. Hence, the findings of these reviews are of greater interest to an academic rather than a policy audience.

Ultimately, the systematic review process is extremely resource intensive. Following a rigid systematic review procedure is demanding and time-consuming, in part because of the high number of studies that are often assessed at the first stage of screening. Table 1 shows the number of studies included at the different stages of our eight systematic reviews. In the initial stages, we had to screen up to 24,263 studies, even after duplicates were removed. For most of our systematic reviews, hundreds of articles still had to be screened even at the full-text stage. Since doing a systematic review properly implies following the same protocol for each study, each article had to be thoroughly screened and assessed – no short cuts could be taken. For example, the Hagen-Zanker et al. (2011) systematic review took about 12 months to complete, at least double the time originally anticipated. These realities – taking a very specific and focused question and investing significant time and resources – do not sit well with donors' high expectations, their desire for breadth in research and their often short timescales for delivery.

Table 1. Number of studies included in our systematic reviews

In addition to the practical difficulties outlined above, the use of systematic reviews in international development research throws up a series of deeper dilemmas. These will be discussed next.

5. Fundamental concerns

Based on our experience, we have a number of fundamental concerns related to the use of systematic reviews in international development research. First, there is an inherent contradiction between the information required to conduct a systematic review and the way peer-reviewed journal articles are written in development studies. Second, it may be much harder to assess evidence in development studies, compared to other fields in which systematic reviews were pioneered. Finally, systematic reviews miss context and process, which are very important in international development research. These concerns will now be discussed in more detail.

Empirical impact studies in development studies are not written in a uniform fashion, unlike in the natural and medical sciences or even in economics. This is problematic from a practical perspective: unclear titles and vague, unstructured abstracts make it more difficult to accurately assess the relevance of a study on the basis of a title or abstract alone. But there is also a more fundamental concern: the attributes that get research published in a peer-reviewed development journal are very different to those required for inclusion in a systematic review. Systematic review inclusion criteria demand a high level of detail on method, data and impact that many peer-reviewed articles either do not contain for lack of space or forego in favour of deeper explorations of historical context. Peer-reviewed journals, therefore, may not be the most appropriate sources for systematic review study retrieval. However, on a more positive note, in other fields the use of systematic reviews has encouraged clearer titles and abstracts, and many journals now provide more detailed methodology sections online (Sandy Oliver, personal communication). This is a promising development.

That said, development researchers should be concerned about the way in which systematic reviews tend to grade evidence. Systematic reviews were pioneered in the natural sciences, where the predominant methodologies are quantitative. These can be assessed in a relatively straightforward (although not entirely unproblematic) fashion using pre-existing methodological quality scales, such as the Maryland Scale of Scientific Methods, the Scottish Intercollegiate Guidelines Network (SIGN) and the World Cancer Research Fund criteria. These scales are applicable to quantitative studies and require an assessment of how well the studies have been executed as well as of the quality of their research design. In international development research, however, much of the research is, and should be, multi-disciplinary, hence also including qualitative methodologies. This poses a challenge because quality appraisal techniques for assessing qualitative studies lack consensus and are still underdeveloped. It is not yet clear how well systematic reviews are able to compare qualitative with quantitative methodologies and findings. The Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) has done some work on combining qualitative and quantitative research in systematic reviews (see Thomas et al. 2004), but this issue still needs to be explored further.

There are many research questions of a qualitative nature that are inappropriate for a systematic review approach. The challenges of assessing qualitative evidence, however, could mean that systematic reviews continue to focus more strongly on quantitative studies and measurable outcomes than they would otherwise. Randomised controlled trials (RCTs) are considered by many to be the ‘gold standard’ of development research, but there should be a place for all kinds of research. As the situation currently stands, the kinds of ‘hierarchies of evidence’ adopted and promoted by systematic reviews not only discount non-positivist science, effectively pushing research that does not set out to prove/disprove hypotheses into a dark corner, but also lean towards studies that investigate measurable outcomes.

Given the rising importance and prominence of systematic reviews in policymaking, this may have serious long-term policy implications if donors become unwilling to fund interventions that generate less tangible, more difficult-to-measure outcomes (such as those that aim to strengthen community cohesion or build state–citizen relations). Therefore, while efforts have been made outside the field of international development to make systematic reviews more inclusive of qualitative evidence (for example, the Cochrane Collaboration's qualitative methods network; see also Spencer et al. (2003) and Petticrew and Roberts (2006)), this remains a challenging area that requires greater attention (Dixon-Woods and Fitzpatrick 2001).

Quantitative methodologies adopted in the natural sciences aim to measure impact and causality by controlling for confounding factors. However, ‘cutting out the noise’ risks missing the point in international development research (and the social sciences more broadly), where context is the primary consideration. By privileging impact studies that are fixated on achieving internal validity, systematic reviews generate only partial findings and a skewed picture of reality. It is only through analyses of political economy, social relations and institutions that we gain a fuller understanding of why particular interventions work in particular environments at particular times. Similarly, while investigations into causality and impact are undeniably vital, understanding process and the internal dynamics of interventions is just as important. Outcomes are ultimately shaped by programme design and delivery, as well as by context (see Pawson and Tilley (1997) on context–mechanism–outcome configurations), and many systematic reviews do not help us understand these dimensions. In other words, the question of why things work is just as policy relevant as whether or not they do in the first place (Gough et al. 2012). Thus, research questions should not simply be reduced to the ‘pragmatics of technical efficiency and effectiveness’ (Evans and Benefield 2001, p. 539).

6. Conclusions and recommendations

Systematic reviews are a new tool in international development research and have the potential to enhance and promote evidence-informed policymaking, particularly in areas with a strong and well-developed evidence base. When systematic review principles are applied sensitively, systematic reviews have a clear advantage over traditional literature reviews. But they may not be as objective as they appear, and their strengths must be balanced against a number of practical and fundamental limitations. For example, because international development is multi-disciplinary in nature, it may be much harder to assess evidence than in the fields in which systematic reviews were pioneered.

There is a need to adapt the methodology to make systematic reviews work for international development and humanitarian research, and finding ways to achieve this will only happen through experimentation with the process. Ultimately, systematic reviews should be seen as a means to an end – helping to get a robust and sensible answer to a focused research question – and not an end in themselves (Lichtenstein et al. (2008) drew a similar conclusion on the use of systematic reviews in the field of nutrition).

Rather than following a rigid systematic review methodology, our shared experience suggests that a more useful approach for development researchers might involve a mixture of compliance and flexibility: compliance with the broad systematic review principles (rigour, transparency and replicability) and flexibility to tailor the process towards improving the quality of the overall findings, particularly if time and budgets are constrained. In short, we should be focusing on the utility that can be gained from a systematic review approach rather than its rigid application.

Beyond these general reflections, we offer five specific conclusions: first, applying systematic review principles to a literature review is highly valuable as it increases breadth, improves transparency and emphasises the importance of empirical evidence over preconceived knowledge. Second, systematic reviews can be used to identify knowledge gaps and highlight methodological inconsistencies and weaknesses; they are therefore useful in identifying future research priorities. Third, full systematic reviews are expensive: researchers and donors need to consider whether the full application of a rigid systematic review approach is justified in relation to the time and resources required. However, they are considerably cheaper than impact evaluations (Snilstveit and Waddington 2012). Fourth, systematic review methodology can be adjusted or developed (see, for example, the work of van der Knaap et al. (2008)) if it helps to get a more useful answer to the research question. Finally, more work is needed to find better ways to assess qualitative research and compare it with quantitative work (Dixon-Woods and Fitzpatrick 2001, Thomas et al. 2004).

Notes

This article is based on a briefing paper published by Hagen-Zanker et al. (2012).

1. A number of systematic reviews may eventually be published as ‘systematic maps’ due to the low number of relevant studies included in the final analysis.

3. For researchers based at the Overseas Development Institute, this meant applying for a visitor membership at the London School of Economics library, but this was only possible due to our proximity to the library.

References

  • Boaz, A., Ashby, A. and Young, K., 2002. Systematic reviews: what have they got to offer evidence based policy and practice? London: UK Centre for Evidence Based Policy and Practice. Working Paper 2.
  • DFID, 2011. Feature: systematic reviews in international development: an initiative to strengthen evidence-informed policy making [online]. http://www.dfid.gov.uk/r4d/SystematicReviewFeature.asp (Accessed: 22 September 2011).
  • Dixon-Woods, M. and Fitzpatrick, R., 2001. Qualitative research in systematic reviews. British medical journal [online], 323 (7316), 765–766. http://www.bmj.com/content/323/7316/765.short (Accessed: 11 September 2011).
  • Duvendack, M., 2010. Smoke and mirrors: evidence from microfinance impact evaluations in India and Bangladesh. Thesis (PhD). University of East Anglia. https://ueaeprints.uea.ac.uk/19437/1/Maren_Duvendack_Smoke_and_Mirrors_PhD_Sept2010.pdf (Accessed: 11 September 2011).
  • Duvendack, M. and Palmer-Jones, R., 2011. High noon for microfinance impact evaluations: re-investigating the evidence from Bangladesh. Norwich: The School of International Development, University of East Anglia. Working Paper 27, DEV Working Paper Series.
  • Duvendack, M., et al., 2011. What is the evidence of the impact of microfinance on the well-being of poor people? London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.
  • Duvendack, M., et al., 2012. Assessing ‘what works’ in international development: issues and methods of risk of bias and meta-analysis in development interventions. Journal of development effectiveness, 4 (3), 456–471.
  • Evans, J. and Benefield, P., 2001. Systematic reviews of educational research: does the medical model fit? British education research journal, 27 (5), 527–541.
  • Gough, D. and Elbourne, D., 2002. Systematic research synthesis to inform policy, practice and democratic debate. Social policy and society, 1 (3), 225–236.
  • Gough, D., Oliver, S. and Thomas, J., 2012. Moving forward. In: Gough, D., Oliver, S. and Thomas, J., eds. Introduction to systematic reviews. London: Sage.
  • Hagen-Zanker, J., McCord, A. and Holmes, R., 2011. The impact of employment guarantee schemes and cash transfers on the poor. ODI systematic review.
  • Hagen-Zanker, J., et al., 2012. Making systematic reviews work for international development research [online]. Secure Livelihoods Research Consortium Briefing Paper 1. www.odi.org.uk/slrc (Accessed: 9 January 2012).
  • Holmes, R., McCord, A. and Hagen-Zanker, J., 2012. What is the evidence of the impact of employment creation on (a) stability and (b) poverty reduction in fragile states? ODI systematic review.
  • Lichtenstein, A.H., Yetley, E.A. and Lau, J., 2008. Application of systematic review methodology to the field of nutrition. Journal of nutrition, 138, 2297–2306.
  • Pawson, R. and Tilley, N., 1997. Realistic evaluation. London: Sage.
  • Petticrew, M., 2001. Systematic reviews from astronomy to zoology: myths and misconceptions. British medical journal, 322 (7278), 98–101.
  • Petticrew, M. and Roberts, H., 2006. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing.
  • SLRC (Secure Livelihoods Research Consortium), 2012. The impacts of five development interventions in fragile and conflict-affected situations. SLRC systematic review.
  • Snilstveit, B. and Waddington, H., 2012. Systematic reviews: let's work out the kinks [online]. 3ie. http://www.3ieimpact.org/ (Accessed: 9 January 2012).
  • Spencer, L., et al., 2003. Quality in qualitative evaluation: a framework for assessing research evidence. London: National Centre for Social Research.
  • The PLoS Medicine Editors, 2011. Best practice in systematic reviews: the importance of protocols and registration. PLoS medicine, 8 (2).
  • Thomas, J., et al., 2004. Integrating qualitative research with trials in systematic reviews. British medical journal, 328, 1010–1012.
  • Vaessen, J., et al., forthcoming. The effects of microcredit on women's control over household spending in developing countries. 3ie systematic review.
  • van der Knaap, L.M., et al., 2008. Combining Campbell standard and the realist evaluation approach: the best of two worlds? American journal of evaluation, 29 (1), 48–57.