Abstract
The performance of large neural networks can be judged not only by their storage capacity but also by the time required for learning. A polynomial learning algorithm with learning time ∝ N² in a network with N units might be practical, whereas a learning time ∝ exp(N) would allow only rather small networks. The question of the absolute storage capacity αc and the capacity for polynomial learning rules αp is discussed for several feedforward architectures: the perceptron, the binary perceptron, the committee machine, and a perceptron with fixed weights in the first layer and adaptive weights in the second layer. The analysis is based partially on dynamic mean-field theory, which is valid for N → ∞. In particular, for the committee machine a value of αp considerably lower than the capacity predicted by replica theory or by simulations is found. This discrepancy is resolved by new simulations that investigate the dependence on learning time and reveal subtleties in the definition of the capacity.