ABSTRACT
In reward-based learning and value-directed remembering, researchers have used many different value structures for the to-be-remembered information. I was interested in whether the scoring structure used in a value-directed remembering task impacts measures of memory selectivity. Participants studied lists of words paired with point values; depending on the list, values ranged from 1 to 20, ranged from 1 to 10 (with each value appearing twice), were either high (10 points) or low (1 point), or were high (10 points), medium (5 points), or low (1 point). Results suggest that (1) in tests of free recall, if using a continuous value scale, the range of values matters for selective memory, (2) analysing the selectivity index can yield different results than modelling item-level recall using point values (and the latter may be a preferable approach), (3) measures of selectivity using different value structures may lack construct validity when testing memory via recognition tests, and (4) the effect of value on memory is much larger on recall than on recognition tests. Thus, I suggest that researchers carefully consider and justify the value structure used when examining selective memory for valuable information in list learning tasks.
Acknowledgment
I would like to thank Matt Rhodes and Alan Castel for their helpful comments regarding the project and manuscript.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The data that support the findings of this study are openly available on the Open Science Framework: https://osf.io/7x5b2/?view_only=fbe2622ab7894dfca52067d09dfe7aa9.
Notes
1 The false alarm rate (i.e., instances in which participants incorrectly identified a new word as having been studied) was .14 (SD = .15). False alarm rates did not differ between lists with a high and low value (M = .13, SD = .18), a high, medium, and low value (M = .14, SD = .16), values 1–10 (M = .16, SD = .18), and values 1–20 (M = .14, SD = .16), [F(3, 297) = 1.33, p = .266, ηp² = .01].
2 In this analysis, the same subjects provide effect sizes for each value structure (i.e., a within-subject design). This violates the assumption that effect sizes were drawn from independent groups; thus, I do not analyse these effect sizes further.