Abstract
Research into intervention with people with speech and language needs often takes the form of single-case/case series experimental studies (SCEDs) or randomized controlled trials (RCTs). This paper explores the nature of these designs, including their strengths and weaknesses, and highlights the value of understanding intervention outcomes for individual participants. An online survey gathered speech and language therapists’ views on their use of the different research designs. We conclude that both research designs are used to inform practice. SCEDs, in particular, are used in developing theories of intervention and informing therapy with individuals. Sound experimental intervention studies of both designs are needed.
Acknowledgments
We should like to acknowledge those colleagues with whom we have discussed and debated the ideas in this paper over many years and particularly Professor David Howard to whom the first and final authors owe so much. We thank Kea Young for implementation of the survey, the anonymous respondents for participating and providing insightful responses, Polly Barr for assistance in compiling the references, and the editor and three anonymous reviewers for insightful comments and suggestions.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1 Note that, for many years, randomized controlled trials and single-case experimental designs were not necessarily in competition. However, there has been an increasing (entirely appropriate) move toward evidence-based practice, and funding of services is often dependent on “evidence.” The widely held view that RCTs provide “evidence” and single-case experimental designs do not (or provide only very weak evidence) prompted this paper.
2 Although originally used to refer to one specific approach to maintaining experimental control within a single-case intervention study (e.g. McReynolds & Kearns, 1983), the term is now being used more broadly to encompass other designs which also maintain experimental control (e.g. Howard et al., 2015; Nickels et al., 2015).
3 It is unclear to what extent the Oxford Centre for Evidence Based Medicine considers that n-of-1 trials are only the highest level of evidence for the particular individual with whom the trial has been conducted, and whether they would also argue (as we do here) that the trial provides high-quality evidence that may be applicable to another individual with the same characteristics.
4 In SCED case series there is an additional statistical requirement to examine the variability across participants that is rarely adhered to (but see, for example, Best, 2005). While statistical analysis of each participant’s results may show that some participants show significant effects and others do not, it is vital to examine whether there is statistical evidence for variability across participants. For example, Howard (2003) reanalyses data from a SCED case series by Pring, Hamilton, Harwood, and Macbride (1993) and, using a homogeneity test (see Leach, 1979), determines that there is statistical evidence that the participants with aphasia show different treatment effects (i.e. the effects of treatment are non-homogeneous). Another approach to determining whether there are significant differences in the effects of treatment across participants can be found within mixed effects modeling. In this approach, one can compare models with and without random slopes for participants: if the model that includes random slopes for participants fits the data better than the model without, this indicates that there is evidence that participants show different effects of treatment.
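As a minimal sketch of the random-slopes comparison described above, the following Python code (using the statsmodels and scipy libraries) simulates hypothetical pre/post scores for six participants whose treatment effects deliberately differ, then compares a mixed model with a single common treatment effect against one that also allows the effect to vary across participants. The data, variable names, and effect sizes are invented for illustration; only the model-comparison logic reflects the approach described in the note.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Simulate hypothetical data: six participants whose treatment
# effects (baseline-to-post change) differ, 20 items per phase.
rng = np.random.default_rng(1)
slopes = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]  # heterogeneous treatment effects
rows = []
for p, slope in enumerate(slopes):
    for phase in (0, 1):  # 0 = baseline, 1 = post-treatment
        for _ in range(20):
            rows.append({
                "participant": f"P{p}",
                "phase": phase,
                "score": 5 + slope * phase + rng.normal(0, 1),
            })
df = pd.DataFrame(rows)

# Model without random slopes: one common treatment effect,
# random intercepts only.
m0 = smf.mixedlm("score ~ phase", df,
                 groups=df["participant"]).fit(reml=False)

# Model with random slopes: the treatment effect is allowed
# to vary across participants.
m1 = smf.mixedlm("score ~ phase", df,
                 groups=df["participant"],
                 re_formula="~phase").fit(reml=False)

# Likelihood ratio test: a better fit for the random-slopes model
# is evidence that treatment effects differ across participants.
lr = 2 * (m1.llf - m0.llf)
p_value = chi2.sf(lr, df=2)  # 2 extra (co)variance parameters
print(f"LR statistic = {lr:.2f}, p = {p_value:.4f}")
```

Because the variance parameters being tested lie on the boundary of the parameter space, the chi-squared reference distribution used here is conservative; with simulated effects as heterogeneous as these, the test nonetheless favours the random-slopes model.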