ABSTRACT
Brain-imaging studies report that distinct neural correlates are associated with the processing of different types of humorous material. However, such evidence lacks temporal information. In this study, we examined the temporal dynamics of humour comprehension for two types of jokes, semantic jokes (SEMs) and puns (PUNs), using electroencephalographic (EEG) techniques. Thirty SEMs and 30 PUNs were presented to 16 healthy subjects while their EEG data were concurrently recorded. PUNs elicited a larger N400 amplitude than SEMs, with no specific scalp distribution, which implies that PUNs induce greater surprise and semantic violation. Meanwhile, SEMs elicited a larger P600-like amplitude at posterior sites, which implies that understanding SEMs requires a higher working-memory load to form novel associations and achieve a successful frame shift. A possible explanation lies in the differing logical mechanisms used to understand SEMs and PUNs: the former build on semantic relationships, the latter on phonological causality.
Acknowledgements
We would like to express our appreciation to Prof. Linden Ball and Dr. Esther Fujiwara, the editors of the Journal of Cognitive Psychology, for their numerous suggestions regarding the construction and interpretation of our research, and to the anonymous reviewers for their valuable comments on earlier versions of this article.
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
Yi-Tzu Chang http://orcid.org/0000-0002-1474-2406
Ching-Lin Wu http://orcid.org/0000-0001-8211-0446
Hsueh-Chih Chen http://orcid.org/0000-0003-2043-0190
Notes
1 Garden-path jokes: A garden-path (GP) joke is a joke in which the initially dominant interpretation turns out to be incoherent and is subsequently replaced by a hidden joke interpretation. Two important factors in the processing of GP jokes are the salience of the initial interpretation and the accessibility of the hidden interpretation. Both factors are assumed to be affected by contextual embedding (Mayerhofer, Maier, & Schacht, 2016).