ABSTRACT
Two main goals of the emerging field of neurocognitive poetics are (a) the use of more natural and ecologically valid stimuli, tasks, and contexts and (b) the provision of methods and models that quantify distinctive features of the verbal materials used in such tasks and contexts and their effects on readers' responses. Metaphor, a natural key element of poetic language, remains understudied insofar as relatively little empirical research has examined literary or poetic metaphors. An exception is Katz et al.'s corpus of 204 literary metaphors by authors such as Shakespeare or Dylan Thomas, for which various rating data are available. We reanalyzed their corpus using a combination of quantitative narrative analysis, latent semantic analysis, and machine learning in order to identify relevant features of the metaphors that influenced the ratings. The combined application of computational tools sheds light on surface and affective-semantic features that co-determine the reception of poetic metaphors, and it successfully predicted the period of origin (i.e., early vs. late), authorship (e.g., Byron vs. Donne), and goodness ratings of the metaphors. The present results can be used for generating quantitative hypotheses or for selecting and matching verbal stimuli in empirical studies of literature and neurocognitive poetics.
Notes
1 We use the term "nonpoetic" loosely, not implying that a given metaphor from these studies could not be viewed by at least one reader as having certain poetic qualities—depending on context—but simply that these metaphors were not taken from literary/poetic sources.
2 Aptness nominally refers to the extent to which the metaphor vehicle (source/ground) captures important aspects of the topic (tenor/target) and is often used as a proxy for the literary or poetic quality of a metaphor.
3 We are grateful to AN Katz for pointing this out.
4 Singular value decomposition (SVD) values.
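As a minimal illustration of how SVD values arise in latent semantic analysis (this is a generic sketch, not the authors' pipeline, and the tiny term-by-document matrix below is invented for demonstration):

```python
import numpy as np

# Latent semantic analysis factorizes a term-document co-occurrence matrix
# via SVD and keeps only the strongest latent dimensions. Rows are terms,
# columns are documents (here: metaphors); the counts are hypothetical.
term_doc = np.array([
    [2.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 2.0],
    [1.0, 0.0, 2.0, 1.0],
])

# Thin SVD: term_doc = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)

# Keep the k strongest latent dimensions; each document is then represented
# by a k-dimensional vector of SVD values usable as model features.
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T  # shape: (n_docs, k)
print(doc_vectors.shape)  # (4, 2)
```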
5 Number of layers = 100, splits per tree = 3, learning rate = 0.1.
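For readers who want to reproduce a comparable model, the hyperparameters in this note map roughly onto a standard gradient-boosted tree ensemble. The sketch below is a hedged analog, not the authors' exact implementation: `n_estimators` stands in for the number of layers, a small `max_leaf_nodes` approximates 3 splits per tree, and the features and ratings are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical data standing in for the metaphor corpus: five numeric
# predictors (e.g., surface and affective-semantic features) and a
# continuous target (e.g., goodness ratings).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Hyperparameters analogous to note 5:
# 100 layers -> n_estimators=100; 3 splits per tree -> shallow trees
# (max_leaf_nodes=4); learning rate = 0.1.
model = GradientBoostingRegressor(
    n_estimators=100,
    max_leaf_nodes=4,
    learning_rate=0.1,
    random_state=0,
)
model.fit(X, y)
print(model.score(X, y))  # training R^2
```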