ABSTRACT
Standard neurocognitive models of language processing have tended to omit emotion processes, while affective neuroscience theories have typically been concerned with how people communicate their emotions and have often not addressed linguistic issues at all. Here, we summarise evidence from temporal and spatial brain imaging studies that have investigated emotion effects on lexical, semantic and morphosyntactic aspects of language during the comprehension of single words and sentences. The evidence reviewed suggests that emotion is represented in the brain as a set of semantic features within a distributed sensory, motor, language and affective network. Moreover, emotion interacts with a number of lexical, semantic and syntactic features across different brain regions and time windows. This is in line with the proposals of interactive neurocognitive models of language processing, which assume an interplay between different representational levels during on-line language comprehension.
Acknowledgements
The authors thank Josep Demestre for his helpful comments and Cristina Villalba-García for her help in preparing the figures. They would also like to thank the reviewers for their insightful comments.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1 Valence and arousal scores are typically measured with the Self-Assessment Manikin (Bradley & Lang, Citation1994), a 9-point scale accompanied by characters depicting the different anchor points (valence: from extremely negative (1) to extremely positive (9); arousal: from extremely calm (1) to extremely energised (9)). The studies by Hofmann et al. (Citation2009) and Schacht and Sommer (Citation2009b), however, used a 7-point scale. Therefore, to allow a direct comparison with the studies by Herbert et al. (Citation2008) and Kissler et al. (Citation2009), we converted the ratings from these studies to a 9-point scale.