
Vector Space Applications in Metaphor Comprehension

 

ABSTRACT

Although vector space models of word meaning have been successful in modeling many aspects of human semantic knowledge, little research has explored figurative language, such as metaphor, using word vector representations. This article reviews the small body of research that has applied such representations to computational models of metaphor. After a short review of vector space models, a detailed overview of metaphor models that make use of vector spaces is provided, and the relevant empirical findings are discussed. These models are divided into two categories based on their differing motivations: “psychological” models are motivated by modeling the cognitive processes involved in metaphor comprehension, whereas “paraphrase” models seek to find the most efficient and accurate way for a computer to paraphrase metaphorical language. These models have been successful in computing adequate metaphor interpretations and shed light on the cognitive processes involved in comprehending metaphor.

Acknowledgments

We thank Marc Joanisse, Ken McRae, and Paul Minda for their helpful suggestions.

Notes

1 Neural networks operate by detecting patterns in data and learning from exemplars (or experience) rather than from a set of programmed rules. Neural networks usually have at least three layers of processing units (or neurons): an input layer, a hidden layer, and an output layer. The input layer receives the data and the output layer provides some response to these data, which can vary depending on the processing task. The information is passed from input to output through a hidden layer, a set of neurons that further processes the information. This layer can be thought of as a set of learned features that the model deems important for prediction (Touretzky & Pomerleau, 1989). One of the key components of neural networks is the weights between neurons, which determine which connections should count more heavily in predictions. The weights are adjusted as the model is trained on exemplars through a process called “backpropagation,” in which the output of the model is compared with the desired response (Rumelhart et al., 1986). After each exemplar, the error is fed back through the model and the weights are adjusted to minimize the error. For example, when the connection between two particular neurons contributes to the error, the weight between them is decreased, thus improving the model. This is how the model learns from experience: with each training exemplar the model recalibrates to provide a better prediction. Typically, once the model reaches some criterion of accuracy, the weights are recorded and used to make predictions on a new set of exemplars.
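The short sketch below is not from the article; it is a minimal illustration of the structure just described (an input layer, one hidden layer, an output layer, and weights adjusted by backpropagation), trained on a toy XOR problem. The layer sizes, learning rate, and number of training passes are arbitrary choices for illustration (Python with NumPy).

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input exemplars
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired responses

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    hidden = sigmoid(X @ W1 + b1)         # input layer -> hidden layer
    output = sigmoid(hidden @ W2 + b2)    # hidden layer -> output layer
    error = output - y                    # compare output with desired response
    # Backpropagation: feed the error back through the network and
    # adjust the weights to reduce it on the next pass.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0, keepdims=True)

# With these settings the predictions typically approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))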

2 Cosines are similar to correlation coefficients in that they vary in strength from 0 to 1, with larger values representing a stronger relationship. However, unlike correlations, negative cosine values are rare.
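A minimal sketch of the cosine measure follows; the word vectors are invented co-occurrence counts, not data from any of the reviewed models. Because the entries are nonnegative, the cosines fall between 0 and 1.

import numpy as np

def cosine(u, v):
    # Dot product divided by the product of the vector lengths.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

dog = np.array([2.0, 5.0, 1.0, 0.0])
cat = np.array([1.5, 4.0, 2.0, 0.5])
car = np.array([0.0, 1.0, 6.0, 3.0])

print(round(cosine(dog, cat), 2))  # closer to 1: more similar meanings
print(round(cosine(dog, car), 2))  # closer to 0: less similar meanings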

3 Terai and Nakagawa’s (2012) model is an exception to this. Because of the nature of their semantic space, the metaphor vector can be interpreted in terms of the probabilities of different features given the metaphor.
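The sketch below is purely illustrative and is not Terai and Nakagawa’s model; the feature names and vector values are invented. It only shows the general idea that a nonnegative metaphor vector defined over feature dimensions can be normalized and read as probabilities of features given the metaphor.

import numpy as np

features = ["predatory", "aggressive", "vicious", "has fins"]
metaphor_vec = np.array([1.6, 1.1, 0.9, 0.1])   # hypothetical feature weights
probs = metaphor_vec / metaphor_vec.sum()        # normalize to sum to 1

for f, p in zip(features, probs):
    print(f"P({f} | metaphor) = {p:.2f}")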

Additional information

Funding

This research was supported by Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant 06P0070.
