Abstract
We study the ability of a simple neural network (a perceptron architecture with no hidden units and binary outputs) to process information in the context of an unsupervised learning task. The network is asked to provide the best possible neural representation of a given input distribution, according to a criterion taken from information theory. We compare various optimization criteria that have been proposed: maximum information transmission, minimum redundancy, and closeness to a factorial code. We show that for the perceptron one can compute the maximum information that the code (the output neural representation) can convey about the input, and that statistical mechanics techniques, such as the replica method, can be used to compute the typical mutual information between the input and output distributions. More precisely, for a Gaussian input source with a given correlation matrix, we compute the typical mutual information when the couplings are chosen randomly, and we determine the correlations between the synaptic couplings that maximize the gain of information. We analyse the results in the case of a one-dimensional receptive field.
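As a rough numerical illustration of the setting (not part of the paper's analysis), the sketch below estimates the mutual information between a correlated Gaussian input and the binary output of a perceptron with random couplings. Since the code is deterministic, the conditional output entropy vanishes and the mutual information reduces to the entropy of the output words, which can be estimated by Monte Carlo sampling. The sizes `N`, `p`, the exponentially decaying covariance, and all variable names are illustrative assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Hypothetical sizes: N input units, p binary output units (assumption).
N, p, n_samples = 8, 4, 200_000

# Correlated Gaussian input source: an illustrative covariance matrix
# with exponentially decaying correlations (the paper treats a general
# correlation matrix).
C = np.array([[0.5 ** abs(i - j) for j in range(N)] for i in range(N)])

# Random synaptic couplings J, one row per output unit.
J = rng.standard_normal((p, N))

# Sample inputs x ~ N(0, C) and compute binary outputs sigma = sign(J x).
x = rng.multivariate_normal(np.zeros(N), C, size=n_samples)
sigma = np.sign(x @ J.T)

# For a deterministic binary code, I(x; sigma) = H(sigma): estimate the
# output entropy from the empirical distribution of output words.
counts = Counter(map(tuple, sigma))
probs = np.array(list(counts.values())) / n_samples
H_bits = -np.sum(probs * np.log2(probs))
print(f"Estimated mutual information: {H_bits:.3f} bits (max {p} bits)")
```

Comparing the estimate with the upper bound of `p` bits shows how input correlations and the choice of couplings reduce the information the binary code conveys, which is the quantity the replica computation evaluates in the typical case.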