Original Articles

Bounds on learning in polynomial time

Pages 1495-1505 | Published online: 13 Aug 2009
 

Abstract

The performance of large neural networks can be judged not only by their storage capacity but also by the time required for learning. A polynomial learning algorithm with learning time ∝ N² in a network with N units might be practical, whereas a learning time ∝ exp N would allow only rather small networks. The question of the absolute storage capacity α_c and of the capacity α_p for polynomial learning rules is discussed for several feedforward architectures: the perceptron, the binary perceptron, the committee machine, and a perceptron with fixed weights in the first layer and adaptive weights in the second layer. The analysis is based partially on dynamic mean-field theory, which is valid for N → ∞. In particular, for the committee machine a value of α_p considerably lower than the capacity predicted by replica theory or simulations is found. This discrepancy is resolved by new simulations investigating the dependence on learning time and revealing subtleties in the definition of the capacity.
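The distinction the abstract draws between the absolute capacity α_c and the capacity α_p reachable in polynomial time can be illustrated with a toy experiment: train a simple perceptron on P = αN random patterns and watch the learning time grow as the load α approaches the capacity (α_c = 2 for the spherical perceptron with random ±1 patterns). The sketch below is purely illustrative and is not the algorithm or analysis of the paper; the function name, parameters, and sweep budget are our own choices.

```python
import numpy as np

def perceptron_learning_time(N, alpha, max_sweeps=1000, seed=0):
    """Train a perceptron on P = alpha*N random +-1 patterns with the
    classic perceptron rule; return the number of full sweeps until all
    patterns are stored, or None if max_sweeps is exceeded.
    Illustrative only -- not the algorithm analysed in the paper."""
    rng = np.random.default_rng(seed)
    P = int(alpha * N)
    xi = rng.choice([-1.0, 1.0], size=(P, N))   # random input patterns
    sigma = rng.choice([-1.0, 1.0], size=P)     # random target outputs
    w = np.zeros(N)
    for sweep in range(1, max_sweeps + 1):
        errors = 0
        for mu in range(P):
            # update whenever pattern mu is not yet correctly stored
            if sigma[mu] * (w @ xi[mu]) <= 0:
                w += sigma[mu] * xi[mu] / np.sqrt(N)  # perceptron update
                errors += 1
        if errors == 0:
            return sweep
    return None  # did not converge within the sweep budget

# Learning time grows sharply as alpha approaches the capacity alpha_c = 2:
for alpha in (0.5, 1.0, 1.5, 1.9):
    t = perceptron_learning_time(N=200, alpha=alpha)
    print(f"alpha = {alpha:.1f}: sweeps = {t}")
```

In this picture, a polynomial-time capacity like α_p corresponds to the largest load still reachable within a polynomially bounded sweep budget; the max_sweeps cutoff mimics that bound and shows why the measured capacity depends on how long one is willing to train, which is the kind of definitional subtlety the abstract refers to.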
