
The key role of representativeness in evidence-based education

Pages 43-62 | Published online: 14 Jun 2019
ABSTRACT

Within evidence-based education, results from randomised controlled trials (RCTs), and meta-analyses of them, are taken as reliable evidence for effectiveness – they speak to “what works”. Extending RCT results requires establishing that study samples and settings are representative of the intended target. Although widely recognised as important for drawing causal inferences from RCTs, claims regarding representativeness tend to be poorly evidenced. Strategies for demonstrating it typically involve comparing observable characteristics (e.g., race, gender, location) of study samples to those in the population of interest to decision makers. This paper argues that these strategies provide insufficient evidence for establishing representativeness. Characteristics typically used for comparison are unlikely to be causally relevant to all educational interventions. Treating them as evidence that supports extending RCT results without providing evidence demonstrating their relevance undermines the inference. Determining what factors are causally relevant requires studying the causal mechanisms underlying the interventions in question.

Acknowledgements

I thank Adrian Simpson and two anonymous reviewers for helpful feedback on the initial draft of this paper.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1. Matching can sometimes be used in conjunction with randomisation. The contrast here is between assignments that use randomisation (with or without matching) and assignments that use only matching.

2. Cartwright (this special issue) explains that RCTs are considered rigorous, in part, because their methodology renders substantive causal knowledge unnecessary.

3. I take this definition to be consistent with how “effectiveness” is used within the majority of EBE literature and I use it throughout the paper. However, it is worth noting that the term is taken to mean something slightly different in the context of effectiveness (vs. efficacy) trials. There, it refers to an intervention’s performance under real-world conditions as opposed to ideal conditions. This usage appears to be more common in clinical research communities than in education research communities, presumably because, unlike medical interventions, trials testing educational interventions are always conducted in real-world settings.

4. For examples, see the Campbell Collaboration, J-PAL, the What Works Clearinghouse, the California Evidence-Based Clearinghouse for Child Welfare, and the Bill & Melinda Gates Foundation.

5. For a discussion analysing differences that bear on the use of experimental methods in various domains, see Wrigley and McCusker (this special issue).

6. Phillips (this special issue) provides a detailed account of the dialectic regarding the value of experimental and non-experimental research, especially for purposes of acquiring causal knowledge.

7. Notably, many EBE critics who favour non-experimental educational research agree. For example, see Biesta (Citation2007). For an overview, see Kvernbekk (Citation2016).

8. See Simpson (this special issue) for a discussion of how effect sizes are calculated across studies.

9. As Simpson (this special issue) argues, treating effect size as an indicator of practical significance is a mistake that can misguide decision makers to students’ detriment.

10. Critics challenge the reliability of these methods, especially John Hattie’s popular meta-meta-analyses (e.g., Bergeron & Rivard, Citation2017; Simpson, Citation2017; also see Simpson, this special issue; Wrigley & McCusker, this special issue). While I find their objections compelling, I set them aside to focus on the evidence these methods could provide if credible.

11. For discussion of the type of information relevant to context-centred causal pathways, see Cartwright and Hardie (Citation2012), Joyce and Cartwright (Citation2018), and Munro, Cartwright, Hardie, and Montuschi (Citation2017).

12. Here I mean standard RCTs commonly used for EBE. Some recommend redesigning RCTs so they can answer a wider array of questions (e.g., Bonell, Fletcher, Morton, Lorenc, & Moore, Citation2012; Pawson & Tilley, Citation1997), but such proposals are controversial and I cannot address them here.
