Abstract
In a recent article, Jamieson and Mewhort (2009) proposed a novel account of artificial grammar learning (AGL) based on Minerva 2, a multiple-trace model of episodic memory. According to this account, test performance in AGL rests on an assessment of the global similarity of the test strings to the memory traces of the training strings. This article presents simulation studies of three different AGL experiments, showing that the predictions of the Minerva 2 model deviate strikingly from participants' performance. It is argued that participants' test performance is not generally based on global similarity.
Acknowledgments
This research was supported by grants from the Deutsche Forschungsgemeinschaft (DFG KI-772/2-1 and DFG KI-772/1-3).
Notes
1 Actually, grammaticality and global similarity are positively correlated in most cases. The negative correlation in Knowlton and Squire's (1996) materials is probably an indirect effect of their equalizing grammatical and ungrammatical stimuli in terms of associative chunk strength.
2 In general, the impact of positional violations would be attenuated if a coarse coding scheme, as suggested by Dienes (1992), were used. With such a coding scheme, a small positional violation (e.g., letter X is on Position 2 when Position 3 is correct) would reduce the global similarity value less than a strong positional violation (e.g., letter X is on Position 2 when Position 6 is correct). However, in the Kinder (2000) test stimuli, most positional violations are of the strong type. Thus, with these specific stimuli, a coarse coding scheme would not improve Minerva 2's predictions very much.
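A minimal sketch may make the contrast concrete. The alphabet, string length, and Gaussian smearing below are illustrative assumptions, not the materials of the cited studies, and the similarity normalization is a simplification of Minerva 2's feature-counting rule (Hintzman, 1984); the coarse coding is only a rough stand-in for the scheme Dienes (1992) suggests.

```python
import numpy as np

ALPHABET = "MTVRX"   # illustrative letter set, not the actual experimental alphabet
N_POS = 8            # assumed maximum string length

def encode(s, coarse=False, width=1.0):
    """Position-by-letter feature vector. With coarse coding, each letter
    is smeared over neighbouring positions with a Gaussian profile."""
    v = np.zeros((N_POS, len(ALPHABET)))
    for pos, ch in enumerate(s):
        k = ALPHABET.index(ch)
        if coarse:
            for p in range(N_POS):
                v[p, k] += np.exp(-((p - pos) ** 2) / (2 * width ** 2))
        else:
            v[pos, k] = 1.0
    return v.ravel()

def echo_intensity(probe, traces):
    """Minerva 2: activation is cubed similarity; echo intensity sums
    activations over all stored traces. Normalizing by the number of
    jointly nonzero features is a simplification of Hintzman's rule."""
    sims = [np.dot(probe, t) / max(np.count_nonzero(probe + t), 1)
            for t in traces]
    return float(np.sum(np.power(sims, 3)))

# One training string stored as a trace; two probes that misplace X
# by one position (small violation) or four positions (strong violation).
trace = [encode("MTVRX", coarse=True)]
small = echo_intensity(encode("MTVXR", coarse=True), trace)
strong = echo_intensity(encode("XTVRM", coarse=True), trace)
```

With strict positional coding (`coarse=False`) the two probes yield identical echo intensities; only coarse coding penalizes the strong displacement more than the small one, which is why a stimulus set whose violations are mostly strong gains little from it.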