Abstract
Complex span tasks, assumed by many to measure an individual's working memory capacity, are predictive of several aspects of higher-order cognition. However, the underlying cause of the relationships between “processing-and-storage” tasks and cognitive abilities is still hotly debated nearly 30 years after the tasks were first introduced. The current study utilised latent constructs across verbal, numerical, and spatial content domains to examine a number of questions regarding the predictive power of complex span tasks. In particular, the relations among processing time, processing accuracy, and storage accuracy from the complex span tasks were examined, in combination with their respective relationships with fluid intelligence. The results point to a complicated pattern of unique and shared variance among the constructs. Implications for various theories of working memory are discussed.
Notes
1. Two recent studies (Friedman & Miyake, 2004; St. Clair-Thompson, 2007b) have shown that the relationships between processing and storage discussed here must be qualified by whether the complex span task is participant- or experimenter-administered. When participants control the timing of item presentation during complex span tasks, processing time correlates positively with performance on the storage component, indicating that participants in these situations alter their processing performance in order to engage in rehearsal and other mnemonic strategies (see Engle et al., 1992, for a similar effect on word span). Importantly, participant-administered complex span scores and processing times do not correlate with higher-order cognition.
2. Given that some participants demonstrated either floor or ceiling performance for the recall scores, we removed these participants (n = 8) and reanalysed the data. All of the results were virtually identical to those reported in the paper.
3. In addition to measuring internal consistency, we also measured test–retest reliability for the recall components of the Rspan and Symspan tasks to compare with a previous estimate of test–retest reliability for the Ospan task (Unsworth et al., 2005b). The correlation from Time 1 to Time 2 was .82 for Rspan and .77 for Symspan (M time between testing = 49.76 days, Med time between testing = 6 days). These values compare well with the test–retest reliability for Ospan (.83) reported by Unsworth et al. (2005b). Note that all remaining analyses used Symspan and Rspan performance at Time 1 only.
4. Because the three complex span tasks were mouse-driven, we also administered a task to assess mouse skill. In this task, a square appeared at a random one of four onscreen locations, and participants were required to click on it as quickly as possible. Response time and errors (i.e., clicks that missed the square) were recorded. All analyses were rerun after partialling out potential differences in mouse skill; the results were exactly the same as those reported.
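One common way to partial out a covariate such as mouse skill is to regress each measure on the covariate and correlate the residuals. A minimal sketch under that assumption, using simulated scores (all variable names and values below are illustrative, not the study's data or method code):

```python
import numpy as np

def residualize(y, covariate):
    """Residuals of y after an intercept-plus-slope regression on the covariate."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(x, y, covariate):
    """Correlation between x and y after controlling for the covariate."""
    return np.corrcoef(residualize(x, covariate), residualize(y, covariate))[0, 1]

rng = np.random.default_rng(0)
mouse_rt = rng.normal(500, 50, 100)              # hypothetical mouse-skill RTs (ms)
span = rng.normal(40, 8, 100) - 0.02 * mouse_rt  # span score mildly tied to mouse skill
gf = rng.normal(20, 4, 100) - 0.01 * mouse_rt    # fluid-intelligence score, same covariate

print(partial_corr(span, gf, mouse_rt))  # association with mouse-skill variance removed
```

By construction, the residuals are uncorrelated with the covariate, so any remaining span–gf correlation cannot be carried by mouse skill.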
5. Note that the model fits could have been improved by allowing the error variances of the components within each span task to correlate (e.g., all of the errors associated with the Ospan). Doing so did not change any of the parameter values or the relative fit of the models, so the simpler models without correlated errors were used throughout.
6. Using z-score composites led to nearly identical results as using the factor composites.
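A unit-weighted z-score composite standardises each task's score and averages across tasks, weighting every task equally rather than by its factor loading. A minimal sketch with hypothetical scores for three span tasks (the values are illustrative, not the study's data):

```python
import numpy as np

def zscore(x):
    """Standardise scores to mean 0, SD 1 (sample SD)."""
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical scores for five participants on three complex span tasks.
ospan = np.array([45, 60, 38, 52, 70], dtype=float)
rspan = np.array([40, 58, 35, 50, 66], dtype=float)
symspan = np.array([20, 30, 18, 26, 35], dtype=float)

# Unit-weighted composite: the average of the three standardised scores.
composite = np.mean([zscore(ospan), zscore(rspan), zscore(symspan)], axis=0)
print(np.round(composite, 2))
```

Because each standardised score has mean 0, the composite is also centred on 0, which makes it directly comparable to a factor-score composite in correlational analyses.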