ABSTRACT
An innovative single-case crossover design incorporating multiple forms of randomization was implemented with eight participants over seven weekly sessions, during which instruction was given in two different pictorial mnemonic (memory-enhancing) strategies: one designed to improve the children’s learning of the dates of various inventions and the other designed to improve their acquisition of unfamiliar vocabulary items. A composite randomization statistical test revealed that, compared with the children’s own preferred learning methods, the mnemonic-strategy approach produced the predicted facilitation effects. At the same time, it was evident that mnemonic instruction enhanced the children’s performance to a greater extent on the vocabulary task than on the inventions task. In-depth examinations of both individual students’ performance profiles and the tasks and procedures were conducted, yielding recommendations and challenges for follow-up single-case intervention research on the topic.
Acknowledgments
We are grateful to the staff and students at East K-7 and West K-7 schools in Holland, Michigan, whose cooperation throughout the study’s execution was invaluable. Thanks to Boris Gafurov for his assistance with the ExPRT program’s graphing routines. Constructive revision suggestions from reviewers of an earlier version of the manuscript improved the quality of the present article.
Declaration of interest
The authors have no conflicts of interest related to any aspects of the research reported here.
Notes
1 If random selection of the same intervention start point for different cases is not allowed (i.e., if the start points are sampled without replacement), so that there is no resulting start-point overlap among different cases (e.g., Koehler & Levin, 1998, Table 6; Levin, Ferron, & Gafurov, 2015), then a replicated AB design and a multiple-baseline design are functionally identical.
2 It is worth noting that, for technical reasons, the effect sizes reported by the ExPRT statistical package used to perform the randomization analyses for our crossover-design variation (and presented later here) are half as large as the corresponding d values used in Levin et al.’s (2014) power calculations (for discussion, see Levin, Ferron, & Gafurov, p. 25). Thus, the d of 1.00 provided here corresponds to an ExPRT-calculated d of .50. It should additionally be mentioned that the Levin et al. “short series” simulations were based on individual cases rather than on case pairs, with the latter (i.e., inventions and vocabulary test scores) comprising the primary analytic focus of the present investigation.
3 Levin, Ferron, and Gafurov (2014) provide a comprehensive rationale and statistical-power support for single-case randomization tests of this kind.
4 For purposes of ExPRT’s randomization test, this expression can be rewritten as: (MI-CV) + (MV-CI) > 0.
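A generic randomization test built on this statistic can be sketched as follows. This is an illustrative sign-flip permutation test, not ExPRT’s actual routine, and the data values are hypothetical: for each case, `d1` and `d2` stand for the two paired differences (MI − CV) and (MV − CI), whose across-cases sum is the test statistic; under the null hypothesis, the randomized condition assignment makes each case’s contribution equally likely to carry either sign.

```python
import itertools

def randomization_test(pairs):
    """One-tailed sign-flip randomization test for paired crossover differences.

    `pairs` holds, per case, the two differences (MI - CV) and (MV - CI).
    The observed statistic is their overall sum; the reference distribution
    enumerates every sign assignment to the case-level contributions
    (the label swap implied by re-randomizing condition order per case).
    """
    observed = sum(d1 + d2 for d1, d2 in pairs)
    as_extreme = 0
    total = 0
    for signs in itertools.product((1, -1), repeat=len(pairs)):
        stat = sum(s * (d1 + d2) for s, (d1, d2) in zip(signs, pairs))
        total += 1
        if stat >= observed:
            as_extreme += 1
    return as_extreme / total  # one-tailed p-value

# Hypothetical per-case difference pairs (not the study's data):
p = randomization_test([(2.0, 1.5), (1.0, 2.5), (3.0, 0.5), (1.5, 1.0)])
print(p)  # with 4 cases, the smallest attainable p is 1/16 = .0625
```

With only four cases the randomization distribution has 2^4 = 16 points, which illustrates why single-case randomization designs need enough cases (or enough randomized start points) to reach conventional significance levels.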
5 Although the current version of ExPRT does not provide a d effect-size measure for the individual (i.e., nonpaired) application of the AB crossover design, the interested user could readily obtain one by dividing the across-cases average phase difference by some standard deviation of choice (e.g., Phase A, Time 1, a pooled measure). The Busk and Serlin (2005) d provided in ExPRT is a descriptive measure that makes no distributional assumptions. Shadish et al. (2014) have developed a single-case effect-size statistic that is comparable to a conventional “group” d statistic; for various measures of between-phase nonoverlap, see Parker et al. (2014).
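The computation described in this note can be sketched in a few lines. The helper below is hypothetical (it is not an ExPRT routine), and the numbers are illustrative: it divides the across-cases mean phase difference by a user-chosen standard deviation, here the sample SD of the cases’ Phase A scores.

```python
from statistics import mean, stdev

def crossover_d(phase_diffs, reference_scores):
    """Descriptive d for the nonpaired AB crossover application:
    across-cases average phase difference divided by a standard
    deviation of choice (here, the SD of the supplied reference
    scores, e.g. each case's Phase A level)."""
    return mean(phase_diffs) / stdev(reference_scores)

# Hypothetical values: per-case phase differences and Phase A scores.
d = crossover_d([2.0, 1.0, 3.0, 2.0], [10.0, 12.0, 9.0, 13.0])
```

As the note observes, the choice of denominator (Phase A SD, Time 1 SD, or a pooled measure) is up to the analyst, so any such d should be reported together with the denominator used.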