
Communicative need in colour naming

Pages 312-324 | Received 02 Oct 2018, Accepted 01 Apr 2019, Published online: 26 Apr 2019
 

ABSTRACT

Colour naming across languages has traditionally been held to reflect the structure of colour perception. At the same time, it has often, and increasingly, been suggested that colour naming may be shaped by patterns of communicative need. However, much remains unknown about the factors involved in communicative need, how need interacts with perception, and how this interaction may shape colour naming. Here, we engage these open questions by building on general information-theoretic principles. We present a systematic evaluation of several factors that may reflect need, and that have been proposed in the literature: capacity constraints, linguistic usage, and the visual environment. Our analysis suggests that communicative need in colour naming is reflected more directly by capacity constraints and linguistic usage than it is by the statistics of the visual environment.

Acknowledgments

We thank Joshua Abbott for helpful discussions, and Delwin Lindsey and Angela Brown for kindly sharing their English colour-naming data with us.

Disclosure statement

No potential conflict of interest was reported by the authors.

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1 For simplicity, since it is assumed that each colour invokes a unique mental representation, we will treat c and m_c interchangeably when the distinction between them does not matter. For example, for any colour naming distribution p(w|c) or prior p(c), it holds that q(w|m_c) = p(w|c) and p(m_c) = p(c).

2 This naming channel is internal to the speaker, and it is distinct from the communication channel between the listener and speaker. The latter takes as input the word produced by the speaker and outputs the word perceived by the listener. The communication channel is left implicit in (a) because this channel is assumed to be noiseless—i.e., the listener observes the speaker’s word unaltered.

3 For compatibility with the analysis performed by Zaslavsky et al. (Citation2018), we followed their regularization process and excluded fifteen languages from our quantitative evaluation. We also repeated the evaluation with all languages and obtained similar results; thus the regularization process does not influence our conclusions.

4 We used the Python package cvxopt to solve this optimization problem. In general, the feasible set could be empty, i.e., there might be no prior that satisfies the constraints. However, this is not the case in our setting.
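The feasibility question in this note can be illustrated with a small sketch. The analysis itself used cvxopt; here, purely for illustration, we use scipy.optimize.linprog with a zero objective to test whether any valid prior (nonnegative, summing to one) satisfies a hypothetical extra linear constraint. The three-colour setup and the constraint p[0] ≥ 0.5 are invented for this example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Feasibility check: does any prior p over n = 3 colours satisfy
#   p >= 0, sum(p) = 1, and (hypothetical constraint) p[0] >= 0.5?
# A zero objective turns the LP solver into a pure feasibility test.
n = 3
A_eq = np.ones((1, n))                    # sum(p) = 1
b_eq = [1.0]
A_ub = np.array([[-1.0, 0.0, 0.0]])       # -p[0] <= -0.5, i.e. p[0] >= 0.5
b_ub = [-0.5]

res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

print(res.success)   # True: the feasible set is non-empty
prior = res.x        # one feasible prior, certifying non-emptiness
```

If the solver reports success, the returned point certifies that the feasible set is non-empty, which is the situation the note describes for the actual constraints.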

5 We considered the 2014 training dataset, which contains 82,783 images. These images are annotated with object boundaries for objects from 80 different categories.

6 Conversion from RGB to CIELAB coordinates was done with the colorspacious Python package, using illuminant C. For the achromatic chips, only pixels with ΔE² = (ΔL*)² + (Δa*)² + (Δb*)² < 70 were considered. For the chromatic chips, the comparison was based only on lightness and hue values, and pixels whose squared distance to the closest chromatic chip exceeded 400 were excluded. These thresholds were validated by manual inspection, to ensure that the converted pixels are indeed perceptually similar to the original ones.
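The achromatic-chip threshold in this note amounts to a squared Euclidean distance in CIELAB. A minimal sketch of just the thresholding step is shown below; the Lab values are hypothetical, and the actual RGB-to-Lab conversion in the paper was done with colorspacious, which this sketch omits.

```python
def sq_delta_e(lab1, lab2):
    """Squared Euclidean distance in CIELAB: (dL*)^2 + (da*)^2 + (db*)^2."""
    return sum((x - y) ** 2 for x, y in zip(lab1, lab2))

# Hypothetical Lab values, for illustration only.
chip = (96.0, 0.0, 0.0)                  # an achromatic chip
pixels = [(93.0, 1.0, -2.0),             # close to the chip
          (60.0, 20.0, 10.0)]            # far from the chip

# Keep only pixels within the squared-distance threshold of 70.
keep = [p for p in pixels if sq_delta_e(p, chip) < 70]
print(keep)  # only the first pixel survives (distance^2 = 14 < 70)
```

The second pixel is excluded because its squared distance (1796) far exceeds the threshold, mirroring how perceptually dissimilar pixels were dropped.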

7 The TF and the FG priors have similar structure and both give the highest probability mass to the achromatic colours. However, the FG prior gives less weight to the achromatic chips than the TF prior does. In addition, according to the FG prior, warm colours have higher probability than cool colours, similar to the SW prior we estimated, and consistent with the salience data of Gibson et al. (Citation2017).

8 These two measures correspond to ε_l and gNID, respectively. See Zaslavsky et al. (Citation2018) for more detail.

9 We thank Delwin Lindsey for drawing our attention to this connection.

Additional information

Funding

This work was supported by DTRA (Defense Threat Reduction Agency) award HDTRA11710042 (NZ, TR) and the Gatsby Charitable Foundation (NZ, NT).
