ABSTRACT
Individuals with Alzheimer’s disease have been found to present an atypical serial position curve in immediate recall tests, showing poor primacy performance and exaggerated recency recall. However, the recency advantage is usually lost after a delay. On this basis, we examined whether the recency ratio (Rr), calculated by dividing recency performance in an immediate memory task by recency performance in a delayed task, was a useful risk marker of cognitive decline. We tested whether change in Mini-Mental State Examination (MMSE) performance between baseline and follow-up was predicted by baseline Rr, and found this to be the case (N = 245). From these analyses, we conclude that participants with high Rr scores, who show disproportionate recency recall in the immediate test relative to the delayed test, present signs of being at risk for cognitive decline or dysfunction.
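The recency ratio described above can be written as a simple quotient. This is a sketch based on the verbal definition in the abstract; the exact scoring of "recency performance" (e.g., how many end-of-list items are counted) is specified in the paper's method, not here.

```latex
Rr = \frac{\text{recency score, immediate recall}}{\text{recency score, delayed recall}}
```

Under this formulation, higher Rr values indicate recency recall that drops disproportionately from the immediate to the delayed test, the pattern the abstract links to risk of cognitive decline.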
Acknowledgements
We wish to thank John J. Sidtis, Antero Sarreal, Hernando Raymundo, and Vita Pomara for their help with data collection.
Conflict of interest
There are no conflicts of interest to declare.
Notes
1 Although beyond the scope of this paper, it is interesting to speculate how the same pattern of results might be explained by models of memory that do not postulate a distinction between short- and long-term memory stores (e.g., Brown, Neath, & Chater, 2007; Tan & Ward, 2000). Most single-store models emphasize the importance of item distinctiveness in retrieval, such that more distinctive items are more likely to be retrieved. One could therefore argue that in AD patients temporal distinctiveness operates in immediate memory tasks, where recency is preserved, but is abandoned or underused in delayed tasks, perhaps owing to impaired processing of, or a limited ability to use, temporal information (see also Howard, Fotedar, Datey, & Hasselmo, 2005).