
Why do people show minimal knowledge updating with task experience: Inferential deficit or experimental artifact?

Pages 155-173 | Received 07 Aug 2006, Accepted 04 Nov 2007, Published online: 05 Jan 2009

Abstract

Students generally do not have highly accurate knowledge about the effectiveness of learning strategies, such as the fact that imagery is superior to rote repetition. During multiple study–test trials using both strategies, participants' predictions about performance on List 2 do not differ markedly between the two strategies, even though List 1 recall is substantially greater for imagery. Two experiments evaluated whether such deficits in knowledge updating about strategy effects were due to an experimental artifact or to inaccurate inferences about the effects the strategies had on recall. Participants studied paired associates on two study–test trials and were instructed to study half of the pairs using imagery and half using rote repetition. Metacognitive judgements tapped the quality of inferential processes about the strategy effects during the List 1 test and measured gains in knowledge about the strategies across lists. One artifactual explanation, noncompliance with strategy instructions, was ruled out, whereas manipulations aimed at supporting the data available to inferential processes improved, but did not fully repair, knowledge updating.

Acknowledgments

This research was supported by a grant from the National Institute on Aging (R37 AG13148), one of the National Institutes of Health.

Notes

1 Henceforth the word “trial” is used to generically refer to study–test trials or to the within-subjects factor. In contrast, the word “list” is used to differentiate the source of paired-associate (PA) items and/or participants' metamemory judgements.

2 These groups were included to evaluate an anchoring hypothesis relevant to understanding individual differences in knowledge updating, which are reported in a separate paper (Hertzog, Price, & Dunlosky, in press). Most important for present purposes, the manipulation did not affect any of our central dependent measures, and hence we collapsed across this independent variable in our reported analyses. We do not discuss it further.

3 Although there are other methods for measuring the accuracy of metacognitive judgements, we limit ourselves to examining absolute accuracy (simple differences between mean judgements and mean recall; see the expression following these notes) because this measure is most relevant to our current aims.

4 A reviewer suggested that the blocking effect could also be due to making the strategy information more salient during testing. Why salience would be greater for blocking than when providing specific information about strategy during test (as in the unpublished experiment we referred to earlier) is unclear, but it cannot be definitively ruled out on the basis of our data.

5 This anchoring hypothesis is consistent with a series of structural equation models conducted on the between-person correlations among these measures that are reported elsewhere (Hertzog et al., in press).
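For reference, the absolute-accuracy measure described in Note 3 can be written as a signed difference between mean judgement and mean recall. The symbols below are illustrative and are not taken from the article:

\[
\text{Absolute accuracy} = \bar{J} - \bar{R}
\]

where \(\bar{J}\) is the mean metacognitive judgement (e.g., predicted percentage recall) and \(\bar{R}\) is the mean percentage of items actually recalled, so positive values indicate overconfidence and negative values indicate underconfidence.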
