Are efficient learners of verbal stimuli also efficient and precise learners of visuospatial stimuli?

Pages 675-692 | Received 15 Dec 2020, Accepted 17 May 2021, Published online: 31 May 2021
ABSTRACT

People differ in how quickly they learn information and how long they remember it, and these two variables are correlated: people who learn more quickly tend to retain more of the newly learned information. Zerr and colleagues [2018. Learning efficiency: Identifying individual differences in learning rate and retention in healthy adults. Psychological Science, 29(9), 1436–1450] termed the relation between learning rate and retention learning efficiency, with more efficient learners having both a faster acquisition rate and better memory performance after a delay. Zerr et al. also demonstrated in separate experiments that how efficiently someone learns is stable across days and years with the same kind of stimuli. The current experiments (combined N = 231) replicate the finding that quicker learning coincides with better retention and demonstrate that the correlation extends to multiple types of materials. We also address the generalisability of learning efficiency: A person’s efficiency in learning Lithuanian-English (verbal-verbal) pairs predicts their efficiency with Chinese-English (visuospatial-verbal) and (to a lesser extent) object-location (visuospatial-visuospatial) paired associates. Finally, we examine whether quicker learners also remember material more precisely by using a continuous measure of recall accuracy with object-location pairs.

Acknowledgments

We thank Ruth Shaffer for help with coding the experiments, and Nate Anderson, Hank Chen, and Justin Vincent for helpful comments. We also thank Timothy Lew for providing his code as a blueprint for the object-location portion of the learning efficiency task.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data for both studies are available at https://osf.io/qznwk/.

Additional information

Funding

This work was supported by grants from Dart Neuroscience and the James S. McDonnell Foundation (awarded to KBM) and by the National Science Foundation Graduate Research Fellowship DGE-1745038 (to CLZ and TS).
