
The effect of predictive history on the learning of sub-sequence contingencies

Pages 108-135 | Received 17 Jul 2008, Published online: 18 Jun 2009

Abstract

Two experiments demonstrated that the prior predictive history of a cue governs the extent to which that cue engages in sequence learning. Using a serial reaction time task, we manipulated the predictiveness of the stimulus locations (cues) with respect to the location of the stimulus on the next trial (outcome), such that half of the cues were good predictors of their outcomes, whilst the other half were poorer predictors. All cues were then paired with novel outcomes. Learning about cues previously established as good predictors proceeded more rapidly than learning about cues previously established as poor predictors. When the simple recurrent network is modified to include a variable associability parameter, these effects are easily modelled.

Acknowledgments

We are grateful to Axel Cleeremans, Luis Jiménez, Andy Delamater and two anonymous reviewers for their helpful comments on an earlier draft of this manuscript. We would like to thank the Advanced Research Computing at Cardiff (ARCCA) team for their help in running the model simulations. This work was supported by Grant RES000230983 from the Economic and Social Research Council.

Notes

1 The context loop in the SRN allows the model to learn sequences containing cue-outcome contingencies that span several intervening elements (see Cleeremans, 1993), but since the sequences used in the current experiments are created from exclusively first-order transitions, prima facie this functionality of the model might seem redundant. However, with respect to sequence learning, the SRN has received more attention than any other model and therefore seems the most appropriate model to apply to these data. Moreover, including context units will only provide a model with greater flexibility, thus allowing for a better assessment of the ability of a model that does not allow variable cue processing to predict our empirical data. In fact, our parameter search included parameter sets with very low learning rates (e.g., .01) for context–hidden unit connections. Thus situations in which the possible contribution of the context units to learning is minimized (i.e., situations in which the SRN will behave in a manner similar to a standard back-propagation network; Rumelhart, Hinton, & Williams, 1986) form a subset of our simulation data. Finally, although second-order information is no more useful than first-order information for the learning of these sequences, this is not to say that second-order information is not present in these sequences. For instance, given the transitions in , sequences such as 1–2–4 (GPH transition followed by GPH transition) will be more common than sequences such as 5–2–4 (PPL transition followed by GPH transition).
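The context loop described above can be illustrated with a minimal sketch of an Elman-style SRN: the previous trial's hidden state is copied into a set of context units, which feed back into the hidden layer alongside the current cue. The layer sizes, weight initialisation, and activation function here are illustrative assumptions, not the authors' exact simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 6, 10, 6                 # e.g., six stimulus locations
W_ih = rng.normal(0, 0.5, (N_HID, N_IN))      # input -> hidden weights
W_ch = rng.normal(0, 0.5, (N_HID, N_HID))     # context -> hidden (the "context loop")
W_ho = rng.normal(0, 0.5, (N_OUT, N_HID))     # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(cue, context):
    """One forward pass: the current cue and a copy of the previous
    hidden state (the context units) jointly predict the next location."""
    hidden = sigmoid(W_ih @ cue + W_ch @ context)
    output = sigmoid(W_ho @ hidden)
    return output, hidden        # hidden becomes the next trial's context

# Run a short sequence of one-hot cues through the network,
# e.g., the 1-2-4 sub-sequence mentioned in the note.
context = np.zeros(N_HID)
for loc in [0, 1, 3]:
    cue = np.zeros(N_IN)
    cue[loc] = 1.0
    prediction, context = step(cue, context)
```

Because the context units carry a trace of earlier trials, the hidden layer can in principle exploit second-order structure even when, as in these experiments, first-order transitions suffice.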

2 Since poor predictor cues consistently predict two different outcomes during Stage 1, we would expect the associability of these cues to decrease as often as it increases. However, because LCR values will be greater for correct than for incorrect predictions, positive changes in associability will always be larger than negative changes, and hence we would expect to observe gradual increases in the associabilities of these cues during this stage.
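The asymmetry described in this note can be sketched as follows: associability rises after a correct prediction and falls after an incorrect one, with each change scaled by the Luce choice ratio (LCR) for the outcome that actually occurred. The update rule, learning rate, and activation values below are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def luce_choice_ratio(activations, outcome):
    """Share of total output activation assigned to the actual outcome."""
    return activations[outcome] / activations.sum()

def update_associability(alpha, lcr, correct, rate=0.1):
    """Raise associability after a correct prediction, lower it after an
    incorrect one; the size of the change is proportional to the LCR."""
    delta = rate * lcr
    new = alpha + delta if correct else alpha - delta
    return min(1.0, max(0.0, new))   # keep associability in [0, 1]

# Hypothetical output activations on a trial where outcome 1 was predicted.
acts = np.array([0.1, 0.6, 0.1, 0.2])
lcr_hit  = luce_choice_ratio(acts, 1)    # correct trial: large LCR
lcr_miss = luce_choice_ratio(acts, 2)    # incorrect trial: small LCR

# A poor predictor is right and wrong equally often, yet because the LCR
# is larger on correct trials, its associability still drifts upward.
alpha = 0.5
for correct, lcr in [(True, lcr_hit), (False, lcr_miss)] * 20:
    alpha = update_associability(alpha, lcr, correct)
```

Even with equal numbers of correct and incorrect trials, the net change per correct/incorrect pair is positive, producing the gradual increase in associability the note describes.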

3 Even in animal studies of learned irrelevance that ostensibly only involve a single conditioned stimulus (e.g., Mackintosh, 1973), the standard analysis assumes a comparison of predictiveness between this conditioned stimulus and the experimental context, with the latter operating essentially as an additional, simultaneously presented cue.
