Regular Articles

Evidence for children’s online integration of simultaneous information from speech and iconic gestures: an ERP study

Pages 1283-1294 | Received 15 Jun 2019, Accepted 12 Feb 2020, Published online: 22 Mar 2020

ABSTRACT

Children perceive iconic gestures along with the speech they hear. Previous studies have shown that children integrate information from both modalities. Yet it is not known whether children integrate the two types of information simultaneously, as soon as they become available (as adults do), or whether they initially process them separately and integrate them only later. Using electrophysiological measures, we examined the online neurocognitive processing of gesture-speech integration in 6- to 7-year-old children. We focused on the N400 event-related potential (ERP) component, whose amplitude is modulated by semantic integration load. Children watched video clips of matching or mismatching gesture-speech combinations, which varied the semantic integration load. The ERPs showed that the N400 amplitude was larger in the mismatching condition than in the matching condition. This finding provides the first neural evidence that, by age 6 to 7, children integrate multimodal semantic information online, in a fashion comparable to that of adults.
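The core dependent measure described here, mean N400 amplitude compared between matching and mismatching conditions, can be sketched in a few lines of analysis code. Below is a minimal, hypothetical illustration using the open-source MNE-Python package; the file name, event labels, electrode sites, and the 300-600 ms time window are our own illustrative assumptions, not details reported in this article.

    # Minimal sketch of a condition-wise N400 amplitude comparison with
    # MNE-Python. All names and parameters below are illustrative
    # assumptions, not taken from the article.
    import mne

    # Load preprocessed, epoched EEG data (hypothetical file name).
    epochs = mne.read_epochs("child_gesture_speech-epo.fif")

    # Average over trials within each condition to obtain ERPs.
    evoked_match = epochs["match"].average()
    evoked_mismatch = epochs["mismatch"].average()

    def mean_amplitude(evoked, picks, tmin=0.3, tmax=0.6):
        """Mean amplitude (microvolts) over a time window and channel set."""
        cropped = evoked.copy().pick(picks).crop(tmin=tmin, tmax=tmax)
        return cropped.data.mean() * 1e6  # volts -> microvolts

    # Centro-parietal sites where the N400 is typically largest (assumption).
    sites = ["Cz", "CPz", "Pz"]
    n400_match = mean_amplitude(evoked_match, sites)
    n400_mismatch = mean_amplitude(evoked_mismatch, sites)

    # A more negative mean amplitude in the mismatching condition reflects
    # a larger N400, i.e. higher semantic integration load.
    print(f"match: {n400_match:.2f} uV, mismatch: {n400_mismatch:.2f} uV")

A per-participant version of such a mean-amplitude measure is what would typically enter the statistical comparison between conditions.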

Acknowledgements

This study was supported by the European Commission, Marie Curie Actions (H2020-MSCA-IF-2015) given to the first author. We would like to thank our actor for her patience in creating the video stimuli and our student assistants who helped during data collection. We further want to express our gratitude to Nick Wood of the Max Planck Institute for Psycholinguistics, who has since passed away, for his technical assistance and support in editing our stimuli. We also thank Linda Drijvers, who offered her assistance and knowledge.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 In this paper, we use the terms "matching" and "mismatching" to indicate cases where gesture and speech refer to similar or very different referents, in line with other N400 studies of spoken language comprehension (e.g. Drijvers & Özyürek, 2017, 2018). Please note that other gesture studies have also used these terms, but in a different way from the current study: Goldin-Meadow and her colleagues (e.g. Goldin-Meadow, 2003) used "matching" and "mismatching" to indicate whether gesture and speech semantically represent the same or different aspects of the same referent, respectively.

Additional information

Funding

This study was supported by the European Commission, Marie Curie Actions [H2020-MSCA-IF-2015] given to the first author.

