ABSTRACT
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when the two to-be-ignored channels expressed an emotion incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was also significantly lower when prosody was incongruent with both verbal content and face. This suggests that prosody biases the processing of emotional verbal content, even when it conflicts with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
Acknowledgements
We thank Nils Kasties, Charlotte Koenen, Sara Letzen, Sandra Linn, and Roland Pusch for their help in recording the spoken stimuli we adopted in this study. We are grateful to Annett Schirmer for her critical suggestions during the initial stages of experimental design and to Marc D. Pell for sharing the face stimuli. Piera Filippi developed the study concept. Piera Filippi, Dan Bowling, and Sebastian Ocklenburg contributed to the study design. Larissa Heege and Sebastian Ocklenburg performed testing and data collection. Piera Filippi performed data analysis. Piera Filippi drafted the manuscript, and all the other authors provided critical revisions. All authors approved the final version of the manuscript for submission.
Disclosure statement
No potential conflict of interest was reported by the authors.