
Vector Space Applications in Metaphor Comprehension

Pages 280-294 | Published online: 26 Dec 2018
 

ABSTRACT

Although vector space models of word meaning have been successful in modeling many aspects of human semantic knowledge, little research has explored figurative language, such as metaphor, using word vector representations. This article reviews the small body of research that has applied such representations to computational models of metaphor. After a short review of vector space models, a detailed overview of metaphor models that make use of vector spaces is provided and the relevant empirical findings are discussed. These models are divided into two categories based on their differing motivations: “psychological” models are motivated by modeling the cognitive processes involved in metaphor comprehension, whereas “paraphrase” models seek the most efficient and accurate way for a computer to paraphrase metaphorical language. These models have been successful in computing adequate metaphor interpretations and have shed light on the cognitive processes involved in comprehending metaphor.

Acknowledgments

We thank Marc Joanisse, Ken McRae, and Paul Minda for their helpful suggestions.

Notes

1 Neural networks operate by detecting patterns in data, learning from exemplars (or experience) rather than from a set of programmed rules. Neural networks usually have at least three layers of processing units (or neurons): an input layer, a hidden layer, and an output layer. The input layer receives the data and the output layer provides some response to those data, which can vary depending on the processing task. Information is passed from input to output through a hidden layer, a set of neurons that further process the information. This layer can be thought of as a set of learned features that the model deems important for predictions (Touretzky & Pomerleau, 1989). One of the key components of neural networks is the set of weights between neurons, which determine how heavily each connection counts toward a prediction. The weights are adjusted as the model is trained on exemplars through a process called “backpropagation,” in which the output of the model is compared with the desired response (Rumelhart et al., 1986). After each exemplar, the error is fed back through the model and the weights are adjusted to minimize it. For example, when the connection between two particular neurons contributes to the error, the weight between them is decreased, thus improving the model. This is how the model learns from experience: with each training exemplar, the model recalibrates to provide a better prediction. Typically, once the model reaches some accuracy criterion, the weights are recorded and used to make predictions on a new set of exemplars.
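
As a concrete illustration, the following is a minimal sketch in Python of the kind of three-layer network described above, trained by backpropagation on a toy task (XOR). The task, layer sizes, learning rate, and number of training passes are illustrative assumptions and are not taken from any model reviewed in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy exemplars: the XOR problem. Each row of X is an input pattern and
# the matching row of y is the desired response.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights between layers (2 inputs -> 4 hidden -> 1 output).
W1 = rng.normal(scale=1.0, size=(2, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for epoch in range(5000):
    # Forward pass: information flows input -> hidden -> output.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Compare the model's output with the desired response.
    error = output - y

    # Backpropagation: feed the error back through the network and adjust
    # each weight in proportion to its contribution to that error.
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_output
    W1 -= learning_rate * X.T @ grad_hidden

# The trained weights can now be used to make predictions on new input.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```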

2 Cosines are similar to correlation coefficients in that they vary in strength from 0 to 1, with larger values representing a stronger relationship. However, unlike correlations, negative cosine values are rare.
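
To illustrate, here is a minimal sketch of the cosine measure in Python. The word vectors below (for a stock metaphor such as “My lawyer is a shark”) are made up for illustration; in practice they would come from a trained vector space model.

```python
import numpy as np

def cosine(u, v):
    # Cosine of the angle between two vectors: their dot product divided
    # by the product of their lengths.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical low-dimensional vectors for two words.
shark = np.array([0.8, 0.1, 0.6, 0.2])
lawyer = np.array([0.7, 0.3, 0.5, 0.1])

print(f"cos(shark, lawyer) = {cosine(shark, lawyer):.2f}")
```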

3 Terai and Nakagawa’s (2012) model is an exception to this. Because of the nature of their semantic space, the metaphor vector can be interpreted in terms of the probabilities of different features given the metaphor.

Additional information

Funding

This research was supported by Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant 06P0070.
