ABSTRACT
Evaluation, a product of the movement for evidence-based policy, is a key step in policy cycles. However, many studies in the sport policy literature have failed to address their underpinning methodologies in a rigorous manner or to justify the measures handpicked by evaluators. As yet, no study has explicitly reflected on the value of evaluation or systematically discussed how mainstream evaluation theories have been used in sport policy studies. Such articulation is necessary in order to provide researchers with additional resources for making informed and strategic methodological choices and to ensure the quality of their analysis. Thus, this article discusses the development of evaluation in general and examines the existing literature on sport policy evaluation. It then outlines four especially noteworthy public policy evaluation frameworks: experimental design, constructivist evaluation, utilisation-focused evaluation and realist evaluation. Next, it uses a specific example to highlight the strengths of realist evaluation as a tool for unpacking additionality and understanding programme logic. In conclusion, the article suggests that future sport evaluation research use theory-based evaluation frameworks (specifically, realist evaluation) to assess sport policy success or failure.
Acknowledgements
I would like to thank the anonymous reviewers for their insightful comments and suggestions. I would also like to thank Professor Ian Henry for guiding me in the field of sport policy evaluation and for his invaluable support.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes
1. Nutley et al.'s (2007) work ‘Using Evidence’ and Davies et al.’s (2000) book ‘What works – Evidence-based policy and practice in public services’ provide some interesting discussion of the ways in which policymakers use evidence to support their decisions and of how they judge different types of evaluations.
2. Potential bias may arise here as a result of programme stakeholders’ control over evaluations in terms of what is looked at and how the data are interpreted, thereby compromising the scientific credibility of the evaluation. As noted by Stufflebeam and Coryn (2014), if the evaluator insists on compliance with professional standards of evaluation, the stakeholders may not welcome the findings. To strike a balance between scientific credibility and fulfilling stakeholders’ interests, Patton (1997) suggested that utilisation-focused evaluation should be carried out by highly competent and confident evaluators who have strong negotiation skills.
3. Readers may access more resources concerning realist evaluation and realist syntheses via the RAMESES projects (http://www.ramesesproject.org/Home_Page.php), including the quality standards and training materials developed for both approaches.
4. It is worth noting here that realist evaluation takes a realist philosophical approach that is distinct from some other forms of realism (e.g. critical realism and fallibilistic realism) (Pawson and Tilley 1997, Pawson 2013). Pawson argues that realist evaluation places greater emphasis on understanding the mechanics of scientific explanation and seeks to develop realism as an empirical method. In contrast, although the critical realism of Bhaskar, for example, also acknowledges the importance of the classic apparatus of empirical science – such as clear conceptualisation and hypothesis-making – his approach tends to focus on understanding aspects of social conditions, structures and causal powers.
5. Programme participants were grouped into three types according to their exercise intensity levels: Type 1 – people who were new to sport and physical activity prior to the staging of the programme; Type 2 – people who had participated in sport and physical activity, but relatively infrequently (1–3 days a week), prior to the staging of the programme; and Type 3 – people who had regularly participated in sport and physical activity (more than 4 days a week) prior to the staging of the Workplace Challenge programme.
6. Here, scientific realist evaluation, as explained by Pawson and Tilley (1997), means the conducting of evaluations under the banner of realism.
Additional information
Notes on contributors
Shushu Chen
Dr Shushu Chen is a Lecturer in Sport Policy and Management at the University of Birmingham. Her principal research interests lie in the field of sport policy and management, particularly Olympic legacy evaluation and sport policy evaluation.