Abstract
Jamieson and Mewhort (2009b) proposed an account of performance in the artificial-grammar judgement-of-grammaticality task based on Hintzman's (1986) model of retrieval, Minerva 2. In the account, each letter is represented by a unique vector of random elements, and each exemplar is represented by concatenating its constituent letter vectors. Although the model successfully simulates several experiments, Kinder (2010) showed that it fails for three selected experiments. We trace the model's failure to a constraint introduced by concatenating letter vectors to construct the exemplar representation. To fix the problem, we use a holographic representation. Holographic representation not only provides the flexibility missing from the concatenation scheme but also acknowledges variability in what subjects notice when they inspect training exemplars. Armed with holographic representations, we show that the model successfully captures the three problematic data sets. We argue for retrospective accounts, like the present one, that acknowledge subjects' skill in drawing unexpected inferences based on memory of studied items, and against prospective accounts that require subjects to learn statistical regularities in the training set in anticipation of an undefined classification test.
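To make the contrast between the two representation schemes concrete, the sketch below builds an exemplar vector both ways: by concatenating random letter vectors, and by a holographic code that binds letters with circular convolution and superposes the resulting chunks. This is a minimal illustration, not the authors' implementation; the dimensionality, the alphabet, and the particular chaining of bindings are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # illustrative dimensionality, not a value from the paper

# Each letter is represented by a unique vector of random elements.
letters = {c: rng.normal(0.0, 1.0 / np.sqrt(dim), dim) for c in "MTVRX"}

def concat_exemplar(s):
    """Concatenation scheme: letters occupy fixed, hard-wired slots,
    so the exemplar's width grows with string length."""
    return np.concatenate([letters[c] for c in s])

def cconv(a, b):
    """Circular convolution, the standard binding operation in
    holographic (HRR-style) codes, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def holo_exemplar(s):
    """Holographic scheme (one possible variant): bind successive
    letters by circular convolution and superpose the chunks into a
    single fixed-width vector."""
    v = np.zeros(dim)
    chunk = letters[s[0]]
    v += chunk
    for c in s[1:]:
        chunk = cconv(chunk, letters[c])
        v += chunk
    return v

# The concatenated code grows with string length; the holographic
# code keeps every exemplar in the same fixed-width space.
print(concat_exemplar("MTV").shape)    # (192,)
print(holo_exemplar("MTV").shape)      # (64,)
print(holo_exemplar("MTVRX").shape)    # (64,)
```

The fixed-width holographic code is what supplies the flexibility the abstract refers to: exemplars of any length live in one common space, so partial or variably attended encodings can still be compared in memory.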
The research was supported by Discovery Grants from the Natural Sciences and Engineering Research Council of Canada. We thank Michael N. Jones for comments and discussion.