Abstract
Feedforward multilayer neural networks implementing random input-output mappings develop characteristic correlations between the activities of their hidden nodes, which are important for understanding their storage and generalization performance. It is shown how these correlations can be calculated within the replica-symmetric approximation. By replacing the multilayer network with an ensemble of perceptrons displaying the same correlations, the relative influence of these correlations on the storage capacity can be studied.