Abstract
Testing for the presence of a deficit by comparing a case to controls is a fundamental feature of many neuropsychological single-case studies. Monte Carlo simulation was employed to study the statistical power of two competing approaches to this task. The power to detect a large deficit was low to moderate for a method proposed by Crawford and Howell (1998; ranging from 44% to 63%) but was extremely low for a method proposed by Mycroft, Mitchell, and Kay (2002; ranging from 1% to 13%). The effects of departures from normality were examined, as was the effect of varying degrees of measurement error in the scores of controls and the single case. Measurement error produced a moderate reduction in power when present in both controls and the case; the effect of differentially greater measurement error for the single case depended on the initial level of power. When Mycroft et al.'s method was used to test for the presence of a classical dissociation, it produced very high Type I error rates (ranging from 20.7% to 49.3%); in contrast, the rates for criteria proposed by Crawford and Garthwaite (2005b) were low (ranging from 1.3% to 6.7%). The broader implications of these results for single-case research are discussed.
Notes
1 Some readers will realise that it was unnecessary to sample separately from the true score and error distributions. For instance, in the example just given, the simulation could have been run simply by sampling from a single normal distribution with a variance of 1.66667 (SD = 1.291). However, the two-stage approach makes it explicit that we are modelling the effects of varying degrees of measurement error and that the deficit is imposed on the true score (i.e., subtracting 2.0 from the case's score imposes a 2-SD deficit on the true score). In other words, the approach was adopted for didactic purposes.
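The equivalence described in this note can be illustrated with a short sketch. The variances below follow the note's example (true-score variance 1.0 plus error variance 0.66667, giving a total of 1.66667); the variable names and control sample size are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_VAR = 1.0         # variance of the true-score distribution
ERROR_VAR = 2.0 / 3.0  # measurement-error variance; total variance = 1.66667
DEFICIT = 2.0          # 2-SD deficit imposed on the case's true score
N_CONTROLS = 10        # illustrative control sample size

# Two-stage approach: sample true score and error separately, imposing
# the deficit on the true score before adding measurement error.
controls = (rng.normal(0.0, np.sqrt(TRUE_VAR), N_CONTROLS)
            + rng.normal(0.0, np.sqrt(ERROR_VAR), N_CONTROLS))
case = ((rng.normal(0.0, np.sqrt(TRUE_VAR)) - DEFICIT)
        + rng.normal(0.0, np.sqrt(ERROR_VAR)))

# Equivalent one-stage approach: a single normal distribution whose
# variance is the sum of the true-score and error variances.
total_sd = np.sqrt(TRUE_VAR + ERROR_VAR)  # = 1.291
controls_onestage = rng.normal(0.0, total_sd, N_CONTROLS)
```

The sum of two independent normal variates is itself normal, with variance equal to the sum of the component variances, which is why the one-stage shortcut gives statistically identical samples.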
2 Note that potentially important deficits will also be missed when Crawford and Howell's (1998) method is used, but power is particularly low for Mycroft et al.'s (2002) method.
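For readers unfamiliar with the method referred to here, Crawford and Howell's (1998) test treats the case as a sample of one and compares it to the control sample with a modified t-test on n − 1 degrees of freedom. A minimal sketch (the function name and example data are illustrative, not from the article):

```python
import numpy as np
from scipy import stats

def crawford_howell(case_score, controls):
    """Crawford & Howell (1998) modified t-test comparing a single case
    to a small control sample.

    t = (x_case - mean_controls) / (sd_controls * sqrt((n + 1) / n)),
    referred to a t distribution on n - 1 degrees of freedom.
    Returns t and the one-tailed p value (probability of a score this low).
    """
    controls = np.asarray(controls, dtype=float)
    n = controls.size
    denom = controls.std(ddof=1) * np.sqrt((n + 1) / n)
    t = (case_score - controls.mean()) / denom
    p = stats.t.cdf(t, df=n - 1)
    return t, p
```

Usage with hypothetical data: `crawford_howell(5.0, [10, 11, 9, 10, 12, 8, 10, 11, 9, 10])` yields a strongly negative t, so the case's score would be judged significantly below the control mean.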
3 Obtaining a sound inferential method for examining the difference between an individual's standardized scores has proved much more difficult than might be anticipated, as the problem is one of testing for a difference between two t variates. The Revised Standardized Difference Test (RSDT) was developed using asymptotic expansion methods and, unlike previously available methods, achieves control of the Type I error rate across all values of the control sample n and the correlation between tasks.