ABSTRACT
In this paper, we propose self-organising maps as possible candidates to explain the psychological mechanisms underlying category generalisation. Self-organising maps are psychologically and biologically plausible neural network models that can learn after limited exposure to positive category examples, without any need for contrastive information. They reproduce human behaviour in category generalisation, in particular the Numerosity and Variability effects, which are usually explained with Bayesian tools. Where category generalisation is concerned, self-organising maps deserve attention as a way to bridge the gap between the computational level of analysis in Marr's hierarchy (where Bayesian models are often situated) and the algorithmic level of analysis, at which plausible mechanisms are described.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1 Shepard (1987) considers the case in which there is one single category example, whereas Tenenbaum and Griffiths (2001) extend the approach to multiple examples. In this paper, we refer to Tenenbaum and Griffiths' (2001) theory.
2 The expression is indeed inversely related to precision: the higher its value, the higher the maximal distance between the category representation and the worst-represented stimulus, hence the lower the precision; and vice versa: the lower the precision, the higher the maximal distance between the category representation and the worst-represented stimulus, and the higher the expression's value.
3 The use of an exponential function of a distance when defining the categorisation of a stimulus, or the activation of a unit when receiving a stimulus, is common in the neural network literature (see for instance Mayor & Plunkett, 2010; Westermann & Mareschal, 2004).
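Such an activation rule can be sketched as follows. This is a minimal illustration in Python rather than the paper's own Matlab code, and the Gaussian form and the width parameter `sigma` are assumptions chosen for concreteness:

```python
import math

def activation(stimulus, weights, sigma=1.0):
    """Unit activation as an exponential function of the Euclidean
    distance between the stimulus and the unit's weight vector.
    The Gaussian form and sigma are illustrative assumptions."""
    d = math.sqrt(sum((s - w) ** 2 for s, w in zip(stimulus, weights)))
    return math.exp(-(d ** 2) / (2 * sigma ** 2))
```

Activation is maximal (1) when the stimulus coincides with the weight vector and decays smoothly with distance.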
4 Our model was implemented using the SOM Toolbox (http://www.cis.hut.fi/somtoolbox), the reference Matlab toolbox for implementing SOMs. Our SOM's weights were initialised with the som_randinit function and then slightly modified so as to contain values outside the input space: to this end, we multiplied the initial values provided by som_randinit by 2.
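The scaled initialisation can be illustrated as follows. This is a hedged Python sketch, not the Toolbox's som_randinit itself; the unit count, dimensionality, and input range are hypothetical parameters:

```python
import random

def scaled_random_init(n_units, dim, lo=0.0, hi=1.0, scale=2.0):
    """Draw random weights uniformly from the input range [lo, hi]
    (as som_randinit does), then multiply by `scale` so the initial
    weights can fall outside the input space."""
    return [[scale * random.uniform(lo, hi) for _ in range(dim)]
            for _ in range(n_units)]
```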
5 We applied a t-test to the Generalisation Degrees for the extreme points, i.e., those most distant from the known category examples.
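The comparison might be run as below. The note does not specify the test variant, so this sketch computes Welch's t statistic for two independent samples; the data passed in would be the (hypothetical) Generalisation Degrees at the extreme points under the two conditions being compared:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (an illustrative choice; the paper's exact test variant
    and data are not reproduced here)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```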
6 The Generalisation Degree in this case refers to a rectangle rather than to single points. It is calculated as the maximal Generalisation Degree over the reference points that define the edges of each rectangle.
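Taking the maximum over edge reference points can be sketched as follows. The number of samples per edge `n` and the helper names are assumptions; the paper does not specify how reference points along the edges are chosen:

```python
def edge_points(x0, y0, x1, y1, n=10):
    """Evenly spaced reference points along the four edges of an
    axis-aligned rectangle (n per edge is an assumed parameter)."""
    pts = []
    for i in range(n + 1):
        t = i / n
        pts += [(x0 + t * (x1 - x0), y0), (x0 + t * (x1 - x0), y1),
                (x0, y0 + t * (y1 - y0)), (x1, y0 + t * (y1 - y0))]
    return pts

def rectangle_degree(degree_fn, rect):
    """Generalisation Degree of a rectangle: the maximum of a
    point-wise degree function over the rectangle's edge points."""
    return max(degree_fn(p) for p in edge_points(*rect))
```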
7 This was done as in the som_randinit function of the SOM Toolbox.
8 We forced the width of the neighbourhood function to decrease to very low values, so that during the last epochs of training only one single unit at a time was affected.
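One common way to obtain such a schedule is an exponential decay of the neighbourhood width. The initial and final widths below are hypothetical values, not those used in the paper:

```python
def neighbourhood_width(epoch, n_epochs, sigma0=3.0, sigma_end=0.1):
    """Exponentially decaying neighbourhood width: by the last
    epochs sigma is so small that effectively only the winning
    unit is updated (sigma0 and sigma_end are assumed values)."""
    return sigma0 * (sigma_end / sigma0) ** (epoch / (n_epochs - 1))
```

At epoch 0 the width equals `sigma0`; at the final epoch it reaches `sigma_end`, shrinking the update to (in effect) the best-matching unit alone.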