Abstract
Studies of implicit learning often examine people's sensitivity to sequential structure, and computational accounts have evolved to reflect this bias. An experiment by Neil and Higham [Neil, G. J., & Higham, P. A. (2012). Implicit learning of conjunctive rule sets: An alternative to artificial grammars. Consciousness and Cognition, 21, 1393–1400] points to limitations of the sequential approach. In the experiment, participants studied words selected according to a conjunctive rule. At test, they discriminated rule-consistent from rule-violating words but could not verbalize the rule. Although these data elude explanation by sequential models, an exemplar model of implicit learning can account for them. To make the case, we simulate the full pattern of results by incorporating vector representations of the words used in the experiment, derived from the large-scale semantic space models LSA and BEAGLE, into an exemplar model of memory, MINERVA 2. We show that the basic memory processes of a classic memory model capture implicit learning of non-sequential rules, provided that stimuli are appropriately represented.
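The retrieval mechanism the abstract refers to is MINERVA 2's global-matching computation (Hintzman, 1984, 1988): a probe vector is compared to every stored trace, each similarity is cubed, and the cubed activations are summed into an echo intensity. A minimal sketch of that standard computation, applied to word vectors such as those from LSA or BEAGLE, might look as follows (the function name and array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def echo_intensity(probe, memory):
    """Echo intensity in MINERVA 2 (Hintzman, 1984, 1988).

    probe:  1-D feature vector for the test item.
    memory: 2-D array with one stored trace per row.
    """
    # Similarity of the probe to each trace: dot product divided by
    # the number of features that are nonzero in the probe or the trace.
    n_relevant = np.count_nonzero((probe != 0) | (memory != 0), axis=1)
    similarity = memory @ probe / n_relevant
    # Cubing preserves sign while amplifying strong matches.
    activation = similarity ** 3
    # Echo intensity: activation summed over all traces.
    return activation.sum()
```

On this account, rule-consistent test words resemble many studied traces and so return higher echo intensities than rule-violating words, which supports discrimination without any verbalizable rule.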
Notes
1. The matrix of word vectors is publicly available as an .rda file at http://www.lingexp.uni-tuebingen.de/z2/LSAspaces/
2. Eight (5%) of the 160 words in the training lists and 15 (4.7%) of the 320 words in the test list lacked vectors in the LSA and BEAGLE spaces. To address this, we substituted semantically similar, category-appropriate words.
3. Neil and Higham (2012) reported that the number of participants in each study condition was approximately equal (p. 1396); our estimates are therefore approximate, assuming 17 participants in the RA-CC condition and 16 in the RC-CA condition.