The effectiveness of feedback in multiple-cue probability learning

Pages 890-908 | Received 02 May 2006, Published online: 09 Apr 2009

Abstract

How effective are different types of feedback in helping us to learn multiple contingencies? This article attempts to resolve a paradox whereby, in comparison to simple outcome feedback, additional feedback either fails to enhance or is actually detrimental to performance in nonmetric multiple-cue probability learning (MCPL), while in contrast the majority of studies of metric MCPL reveal improvements at least with some forms of feedback. In three experiments we demonstrate that if feedback assists participants to infer cue polarity then it can in fact be effective in nonmetric MCPL. Participants appeared to use cue polarity information to adopt a linear judgement strategy, even though the environment was nonlinear. The results reconcile the paradoxical contrast between metric and nonmetric MCPL and support previous findings of people's tendency to assume linearity and additivity in probabilistic cue learning.

Acknowledgments

The support of the Economic and Social Research Council (ESRC) and The Leverhulme Trust is gratefully acknowledged. The work was part of the programme of the ESRC Research Centre for Economic Learning and Social Evolution, University College London. We thank Joshua Klayman, Nigel Harvey, David Lagnado, and Henrik Olsson for valuable feedback on an earlier version of this article.

Notes

1 The slope of the line for the PFB group in suggests that asymptotic performance had not been reached. To investigate this possibility we conducted an experiment with extended training (three sessions of 240 trials over a 3-day period). Although a marginally significant effect indicating improvement across sessions was found, there was little suggestion that performance exceeded the .80 level achieved at the end of 240 trials in Experiment 1.

2 To see why, imagine that a participant gave an estimate of 25% (.25) on all 32 trials on which the presented pattern had a probability of increase below .50 (and thus assigned a 0 in Step 1 above). This would mean the participant made a correct estimate (below .50) on all the patterns on which the objective probability was indeed below .50, giving a score of 32/32 or 1.0. The calculation in Step 3 would then be 32[(1 – .5) × (1 – .5)] = 8. Now imagine the same participant assigned the estimate 75% (.75) on all 32 trials on which the presented pattern had a probability of increase above .50 (thus assigned a 1 in Step 1). By the same logic as above we would have the identical calculation in Step 3, giving a DI of 8 for the 75% estimate. The summed DI for this participant would be 16, which when divided by the number of trials (64) gives the perfect mean DI of .25. Note that any estimate that is consistently below 50% for all patterns with an objective probability below .50 and consistently above 50% for an objective probability above .50 will lead to the same final DI of .25. This is why the measure reveals discrimination between occasions when the share price will go up or not, as compared to calibration of the participant either side of that “binary” prediction.
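The arithmetic in this note can be sketched in code. This is a minimal illustration, not the authors' published procedure: the grouping of estimates by whether they fall below or above .50, and the "score" as the proportion of those trials whose objective probability lies on the same side, are assumptions reconstructed from the worked example (Steps 1–3 are not reproduced here verbatim).

```python
def discrimination_index(objective_probs, estimates):
    """Mean DI over trials; .25 indicates perfect binary discrimination.

    objective_probs: objective probability of a price increase on each trial.
    estimates: the participant's probability estimate on each trial.
    """
    n = len(objective_probs)
    total = 0.0
    for side in (0, 1):
        # Trials whose estimate falls on this side of .50 (side 1 = above).
        trials = [(o, e) for o, e in zip(objective_probs, estimates)
                  if (e > 0.5) == bool(side)]
        if not trials:
            continue
        # Proportion of those trials where the objective probability lies on
        # the same side of .50 -- the "score" in the note (e.g. 32/32 = 1.0).
        score = sum((o > 0.5) == bool(side) for o, _ in trials) / len(trials)
        # Step 3 of the note: n_trials * (score - .5)^2, e.g. 32 * .25 = 8.
        total += len(trials) * (score - 0.5) ** 2
    return total / n

# The worked example: .25 on all 32 low patterns, .75 on all 32 high patterns
# (the objective probabilities .3 and .7 are placeholders for "below"/"above" .50).
objective = [0.3] * 32 + [0.7] * 32
estimates = [0.25] * 32 + [0.75] * 32
di = discrimination_index(objective, estimates)
# di == 0.25, the perfect mean DI described in the note
```

As the note observes, any response pattern that stays consistently on the correct side of .50 yields the same DI of .25 here, regardless of how far from .50 the estimates sit.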
