Abstract
Rapid serial visual presentation (RSVP) tasks have frequently been used to assess attentional control in psychiatric samples; however, it is unclear whether RSVP tasks exhibit the psychometric properties necessary to assess these individual differences. In the current study, we examined the reliability and validity of single-target computerized RSVP task outcomes in a sample of 63 participants with moderate to severe psychiatric illness. At the group level, we observed the classical attentional blink phenomenon. At the individual level, conventional indices of attentional blink magnitude exhibited poor internal consistency. We empirically evaluated a novel index of attentional blink magnitude for single-target RSVP tasks that collapses across the experimental trials in which the attentional blink occurs and disregards performance on control trials, which suffer from ceiling effects. This new index yielded substantially improved reliability estimates. Both the novel and conventional indices provided evidence of convergent validity. Consequently, this novel index may be worth examining and adopting by researchers interested in assessing individual differences in attentional blink magnitude.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Correction Statement
This article has been corrected with minor changes. These changes do not impact the academic content of the article.
Notes
1 A summary of the design characteristics and internal consistency of attentional blink magnitude for each of these tasks is presented in Supplementary Table S1.
2 We report the results of Cronbach’s alpha using point-biserial correlations, which is the default input correlation for Cronbach’s alpha when item responses are dichotomous. For those interested, we have also reported a tetrachoric alpha, in which tetrachoric correlations were used as input values, in Supplementary Table S2. However, we focus on Cronbach’s alpha results in the manuscript for two reasons, which are discussed in more detail by Chalmers (Citation2018). First, Cronbach’s alpha does not require continuous item response data, and as such, alternative forms of alpha are not necessary when data are dichotomous. Second, and perhaps more importantly, alpha coefficients with alternative correlations as input are not equivalent to Cronbach’s alpha and may result in unacceptably liberal sampling error estimates. Therefore, tetrachoric alpha coefficients cannot be interpreted as though they are standard estimates of reliability.
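For readers who wish to verify this property themselves, a minimal sketch of the computation follows. It is not the authors' analysis code; it simply implements the standard variance-based formula for Cronbach's alpha, which for dichotomous (0/1) items reduces to KR-20 and corresponds to using point-biserial (i.e., Pearson) inter-item correlations, as described in the note. The variable names are illustrative.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix.

    With dichotomous 0/1 items this is equivalent to KR-20; the
    underlying inter-item correlations are point-biserial (Pearson),
    not tetrachoric.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                      # number of items (trials)
    item_vars = items.var(axis=0, ddof=1)   # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: two perfectly redundant dichotomous items give alpha = 1.
scores = np.array([[1, 1], [0, 0], [1, 1], [0, 0], [1, 1]])
print(round(cronbach_alpha(scores), 3))
```

Note that when an item has zero variance (e.g., a trial every participant answers correctly, as at ceiling), the item contributes nothing to the estimate and correlation-based variants fail outright, which is consistent with the computational issues reported in notes 3 and 5.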
3 We attempted to compute McDonald’s omega for each index; however, it could not be computed because several variables had a variance of zero and many of the remaining variables were very weakly or negatively correlated.
4 We also examined this association while controlling for color-naming and word-reading completion times. The pattern of results was the same as that presented below.
5 Split-half reliability estimates computed separately for control trials at the 100 ms, 200 ms, 300 ms, and 700 ms lags were calculated with only 50 permutations of random splits. Additional permutations produced error messages because multiple variables had a variance of zero. Split-half reliability computed manually from odd and even trials confirmed that reliability estimates are consistently extremely low (< .12) for control trials at each lag.
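The permutation-based procedure described in this note can be sketched as follows. This is an illustrative reimplementation, not the authors' code: it averages Spearman-Brown corrected correlations over random splits of a subjects-by-trials accuracy matrix, and the function and variable names are assumptions. As the note observes, when a half-split has zero variance across participants (e.g., control trials at ceiling), the correlation is undefined, which is the source of the reported errors.

```python
import numpy as np

def permutation_split_half(trials, n_perm=50, seed=None):
    """Average Spearman-Brown corrected split-half reliability over
    n_perm random splits of an (n_subjects, n_trials) accuracy matrix."""
    rng = np.random.default_rng(seed)
    trials = np.asarray(trials, dtype=float)
    n_trials = trials.shape[1]
    estimates = []
    for _ in range(n_perm):
        order = rng.permutation(n_trials)
        half_a = trials[:, order[: n_trials // 2]].mean(axis=1)
        half_b = trials[:, order[n_trials // 2:]].mean(axis=1)
        # Undefined (NaN) if either half has zero variance across subjects,
        # as happens for control trials at ceiling.
        r = np.corrcoef(half_a, half_b)[0, 1]
        estimates.append(2 * r / (1 + r))  # Spearman-Brown correction
    return float(np.nanmean(estimates))
```

In this sketch, splits yielding an undefined correlation are simply dropped from the average; software that instead raises an error on such splits would reproduce the failure mode described above.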