Abstract
A good systematic review is often likened to the pre-flight instrument check that ensures a plane is airworthy before take-off. By analogy, research synthesis follows a disciplined, formalized, transparent and highly routinized sequence of steps so that its findings can be considered trustworthy before being launched on the policy community. The most characteristic aspect of that schedule is the appraise-then-analyse sequence. The research quality of the primary studies is assessed, and only those deemed to be of a high standard may enter the analysis; the remainder are discarded. This paper rejects this logic, arguing that the 'study' is not the appropriate unit of analysis for quality appraisal in research synthesis. There are often nuggets of wisdom in methodologically weak studies, and systematic review disregards them at its peril. Two evaluations of youth mentoring programmes are appraised at length. A catalogue of doubts is raised about their design and analysis. Their conclusions, which incidentally run counter to each other, are highly questionable. Yet there is a great deal to be learned about the efficacy of mentoring if one digs into the specifics of each study. 'Bad' research may yield 'good' evidence, but only if the reviewer follows an approach in which appraisal and analysis proceed together rather than in sequence.
Notes
[1] Alas, it is impossible to appraise all of these appraisal tools here. For example, another candidate for inspection might be the approach used by the EPPI group (the Evidence for Policy and Practice Information and Co-ordinating Centre, Social Science Research Unit, Institute of Education, University of London), on which there has already been a ferocious barrage of opinion and counter-opinion (MacLure, 2005; Oakley, 2003). I pinpoint the Cabinet Office study for its provenance, because it is a distillation of many previous schemas and, above all, because it is the most clearly, formally and openly articulated.