Regular articles

How useful are corpus-based methods for extrapolating psycholinguistic variables?

Pages 1623-1642 | Received 10 May 2014, Accepted 24 Oct 2014, Published online: 19 Feb 2015

Abstract

Subjective ratings for age of acquisition, concreteness, affective valence, and many other variables are an important element of psycholinguistic research. However, even for well-studied languages, ratings usually cover just a small part of the vocabulary. A possible solution involves using corpora to build a semantic similarity space and applying machine learning techniques to extrapolate existing ratings to previously unrated words. We conduct a systematic comparison of two extrapolation techniques, k-nearest neighbours and random forest, in combination with semantic spaces built using latent semantic analysis, a topic model, a hyperspace analogue to language (HAL)-like model, and a skip-gram model. A variant of the k-nearest neighbours method used with skip-gram word vectors gives the most accurate predictions, but the random forest method has the advantage of easily incorporating additional predictors. We evaluate the usefulness of the methods by exploring how much of the human performance in a lexical decision task can be explained by extrapolated ratings for age of acquisition, and how precisely we can assign words to discrete categories based on extrapolated ratings. We find that at least some of the extrapolation methods may introduce artefacts to the data and produce results that could lead to different conclusions from those that would be reached based on the human ratings. From a practical point of view, the usefulness of ratings extrapolated with the described methods may be limited.
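The extrapolation idea can be made concrete with a minimal sketch of a similarity-weighted k-nearest-neighbours predictor over word vectors. The function name, the cosine-similarity weighting, and all parameters here are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def knn_extrapolate(vectors, ratings, query_vec, k=3):
    """Predict a rating for an unrated word as the similarity-weighted
    mean of the ratings of its k nearest neighbours in the semantic space.

    vectors  : (n_words, dims) array of word vectors for rated words
    ratings  : (n_words,) array of human ratings for those words
    query_vec: (dims,) vector of the unrated word
    """
    # Cosine similarity between the query vector and every rated word.
    norms = np.linalg.norm(vectors, axis=1) * np.linalg.norm(query_vec)
    sims = (vectors @ query_vec) / norms
    # Indices of the k most similar rated words.
    top = np.argsort(sims)[-k:]
    weights = sims[top]
    # Similarity-weighted average of the neighbours' ratings.
    return float(np.dot(weights, ratings[top]) / weights.sum())
```

With skip-gram vectors, `vectors` would hold the embeddings of words that already have human ratings, and the predicted value for an unrated word falls between the ratings of its closest semantic neighbours.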

Notes

1 For random forest, the function that describes the relationship does not have to be linear or even continuous.

2 We later compared the result of this procedure with a standard MinHash approach to removing near-duplicates (Broder, Citation1997). The resulting sets of files overlapped by 98.5%.
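The MinHash comparison mentioned above can be sketched briefly: each document is reduced to a short signature of per-seed minimum hash values over its shingles, and the fraction of matching signature slots estimates the Jaccard similarity used for near-duplicate detection. Parameter values (`num_hashes`, `shingle_len`) are illustrative, not those of the study:

```python
import hashlib

def minhash_signature(text, num_hashes=64, shingle_len=5):
    """Return a MinHash signature over character shingles of the text."""
    shingles = {text[i:i + shingle_len]
                for i in range(len(text) - shingle_len + 1)}
    signature = []
    for seed in range(num_hashes):
        # Simulate one hash function per seed by salting a stable hash.
        signature.append(min(
            int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles
        ))
    return signature

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching slots estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose estimated Jaccard similarity exceeds a chosen threshold would be treated as near-duplicates and one copy removed from the corpus.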

3 We used a dataset obtained from the authors of the original study (Kuperman et al., Citation2012). The dataset did not correspond perfectly to the one on which the published ratings were based, and which was used to train the models, but it correlated very highly (r = .96) with that dataset. A total of 705 words that were not included in the dataset were excluded from the analysis.
