Original Articles

Internal Sigmoid Dynamics in Feedforward Neural Networks

Pages 43-73 | Published online: 01 Jul 2010
 

Abstract

Departing from the customary view of the sigmoid thresholding function as a smooth transition non-linearity, introduced into multi-layer perceptron (MLP) networks to provide a continuously differentiable, albeit slow, gradient descent toward an optimal solution minimizing some error norm, here a different, more fundamental viewpoint is proposed: the intrinsic dynamics throughout the network become those of the quadratic map of chaos theory. This new viewpoint enables valuable insights into the initial, intermediate and final dynamics of supervised learning algorithms such as the widely used backpropagation scheme. More specifically, although approximately, the weight changes in the aforementioned three learning stages correspond to the three regimes of the quadratic map: fluctuation, periodicity and fixed points. The purpose of this paper is to examine this basic idea, to support it theoretically, by example and through the literature, and to suggest next steps in its further investigation.
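The correspondence the abstract draws can be illustrated numerically. A minimal sketch (not the paper's own code, and the specific parameter values below are chosen only for illustration): the quadratic (logistic) map x → r·x·(1 − x) settles into a fixed point, a periodic cycle, or sustained fluctuation depending on r, and the sigmoid's derivative σ′ = σ(1 − σ) has the same quadratic form.

```python
import math

def logistic_map(r, x0=0.2, n=500):
    """Iterate the quadratic (logistic) map x -> r*x*(1-x) n times."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def tail(r, x0=0.2, burn=500, keep=4):
    """Return a few successive iterates after a burn-in, revealing the regime."""
    x = logistic_map(r, x0, burn)
    out = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        out.append(round(x, 4))
    return out

def sigmoid(z):
    """Sigmoid thresholding function; note sigmoid'(z) = s*(1-s), a quadratic map form."""
    return 1.0 / (1.0 + math.exp(-z))

print(tail(2.5))   # fixed-point regime: all iterates near 0.6
print(tail(3.2))   # periodic regime: iterates alternate between two values
print(tail(3.9))   # fluctuation regime: no repeating pattern
```

For r = 2.5 the map converges to the fixed point 1 − 1/r = 0.6; for r = 3.2 it settles onto a stable period-2 cycle; for r = 3.9 it fluctuates irregularly, matching the three regimes the abstract names.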
