Abstract
Recent research in psychology and neuroscience has demonstrated that co-speech gestures are semantically integrated with speech during language comprehension and development. The present study explored whether gestures also play a role in language learning in adults. In Experiment 1, we exposed adults to a brief training session presenting novel Japanese verbs with and without hand gestures. Three sets of memory tests (at five minutes, two days and one week) showed that the greatest word learning occurred when gestures conveyed imagistic information redundant with speech. Experiment 2 was a preliminary investigation into possible neural correlates of such learning. We exposed participants to similar training sessions over three days and then measured event-related potentials (ERPs) to words learned with and without co-speech gestures. The main finding was that words learned with gesture produced a larger Late Positive Complex (indexing recollection) at bilateral parietal sites than words learned without gesture. However, there was no significant difference between the two conditions for the N400 component (indexing familiarity). The results have implications for pedagogical practices in foreign language instruction and for theories of gesture-speech integration.
Notes
1It should be noted that there is a long tradition of using the body as a tool for second language instruction and learning, such as the Total Physical Response (TPR) technique introduced by Asher (1969). The present research differs from the TPR approach because it does not rely on the whole body, but rather takes advantage of naturally occurring hand movements that accompany speech. Most importantly, because such gestures are non-arbitrary, the approach requires no special training for the learner or instructor.
2An incongruent stimulus was created by taking a gesture that was congruent with one word (e.g., ‘Nomu/drinking gesture/means drink/drinking gesture/’) and placing it with a different word (e.g., ‘Kiru /drinking gesture/means cut/drinking gesture/’). In this way, the congruent and incongruent conditions both presented the same content in speech and gesture across all words, but the difference was whether the content was congruent or incongruent within a particular word.
3Note that each word was repeated 10 times during the memory test. This repetition likely made it more difficult for participants to accurately recall whether a word was old or new toward the end of the test. Although this is not ideal, if anything the repetition would have made it harder to find significant differences among our conditions. Therefore, our ERP results likely under-represent the effects of gesture on memory in real language learning situations, although the small number of items limits the generalisability of the results.
4Inspection of the figures makes clear that the No Instruction condition produced a large P300 effect compared with the Speech and Speech + Gesture conditions. This P300 is likely due to the fact that the ‘new’ items were relatively infrequent (appearing 33% of the time) and were likely treated as ‘oddball’ stimuli (Key, Dove, & Maguire, 2005). In the interest of space, this interesting – but not surprising – effect will not be discussed in the present manuscript.