The role of co-speech gestures in retrieval and prediction during naturalistic multimodal narrative processing

Pages 367-382 | Received 08 Mar 2023, Accepted 10 Dec 2023, Published online: 19 Dec 2023

ABSTRACT

During daily communication, visual cues such as gestures accompany the speech signal and facilitate semantic processing. However, how gestures impact lexical retrieval and semantic prediction, especially in naturalistic settings, remains unclear. Here, participants watched a naturalistic multimodal narrative in which an actor narrated a story and spontaneously produced co-speech gestures. For all content words, word frequency and lexical surprisal were regressed against the EEG using temporal response functions (TRFs), which were fitted separately, additively, and interactively for words accompanied and not accompanied by gestures. Results from our analyses suggest a robust modulatory effect of gesture on the frequency-dependent regression N400. In addition, the single-predictor model provided some evidence of a modulatory effect of gesture on the surprisal-N400 effect. Our findings thus suggest that, on a neural level, the presence of co-speech gestures facilitates lexical retrieval and potentially semantic prediction during the processing of naturalistic multimodal stimuli.
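To make the analysis described above concrete, the following is a minimal sketch of a TRF regression of word-level predictors against continuous EEG, assuming MNE-Python's ReceptiveField estimator. The abstract does not specify a toolbox, and the sampling rate, lag window, regularisation strength, and all variable names below are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: ridge-regularised temporal response functions (TRFs)
# relating impulse-coded word predictors to continuous EEG, using
# mne.decoding.ReceptiveField. All parameters are assumptions.
import numpy as np
from mne.decoding import ReceptiveField

fs = 128                         # assumed EEG sampling rate (Hz)
n_times, n_channels = 60 * fs, 64  # hypothetical: 1 min of 64-channel EEG
rng = np.random.default_rng(0)

eeg = rng.standard_normal((n_times, n_channels))  # placeholder EEG

# Predictors are impulse regressors: zero everywhere except at content-word
# onsets, where they carry that word's value (e.g. log frequency).
freq_gesture = np.zeros((n_times, 1))     # words accompanied by a gesture
freq_no_gesture = np.zeros((n_times, 1))  # words without a gesture

onsets = rng.choice(n_times - fs, size=200, replace=False)
has_gesture = rng.random(200) < 0.5
log_freq = rng.standard_normal(200)       # placeholder word frequencies
freq_gesture[onsets[has_gesture], 0] = log_freq[has_gesture]
freq_no_gesture[onsets[~has_gesture], 0] = log_freq[~has_gesture]

# "Additive"-style model: separate regressors for gesture vs. no-gesture words.
X = np.concatenate([freq_gesture, freq_no_gesture], axis=1)

rf = ReceptiveField(
    tmin=-0.2, tmax=0.8,          # lag window spanning the N400 range
    sfreq=fs,
    feature_names=["freq_gesture", "freq_no_gesture"],
    estimator=1e3,                # ridge regularisation strength (assumed)
    scoring="corrcoef",
)
rf.fit(X, eeg)
# rf.coef_ has shape (n_channels, n_features, n_delays): one regression
# ERP per predictor and channel, in which a frequency-dependent N400
# would appear as a negative deflection around 300-500 ms.
```

Fitting the gesture and no-gesture regressors jointly in one design matrix lets their TRFs be compared directly; the single-predictor ("separate") and interactive variants mentioned in the abstract would change only how the design matrix is assembled.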

Acknowledgements

SO implemented the EEG preprocessing and data-analysis scripts, implemented the multivariate regression analyses, created figures, interpreted results, and wrote and edited the manuscript. BS designed the experiment, acquired funding, and reviewed the manuscript. LM implemented the multivariate regression and GPT-2 lexical-surprisal analyses, interpreted results, and wrote, reviewed, and edited the manuscript. YH designed the experiment, acquired data, created figures, interpreted results, and wrote, reviewed, and edited the manuscript.
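For readers unfamiliar with the GPT-2 lexical-surprisal analysis mentioned above, the following is a minimal sketch of per-token surprisal estimation, assuming the Hugging Face transformers library. The paper's exact model variant, context handling, and word-level aggregation are not stated here; the example text and all names below are hypothetical.

```python
# Hedged sketch: per-token surprisal from GPT-2 via Hugging Face transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "the actor narrated a story"  # hypothetical narrative excerpt
ids = tokenizer(text, return_tensors="pt").input_ids  # (1, n_tokens)

with torch.no_grad():
    log_probs = torch.log_softmax(model(ids).logits, dim=-1)

# Surprisal of token t is -log2 P(token_t | tokens_<t); the first token is
# skipped because it has no left context under this causal model.
target = ids[0, 1:]
surprisal_bits = (
    -log_probs[0, :-1].gather(1, target[:, None]).squeeze(1)
    / torch.log(torch.tensor(2.0))
)

for tok, s in zip(tokenizer.convert_ids_to_tokens(target), surprisal_bits):
    print(f"{tok}\t{s:.2f} bits")
# Words split into multiple BPE tokens would need their sub-token surprisals
# summed to yield one word-level value before entering the EEG regression.
```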

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

Data will be made available upon request by contacting YH at [email protected].

Additional information

Funding

This project was funded by the Deutsche Forschungsgemeinschaft (DFG), funding number HE8029/2-1, the von Behring-Röntgen-Stiftung (funding numbers 59-0002 and 64-0001), and the Excellence Program “The Adaptive Mind” of the Hessian Ministry of Higher Education. SO received funding from the Agencia Nacional de Investigación y Desarrollo (ANID), national grant for doctoral studies N° 21181786.
