
Multilayer feedforward neural networks: a canonical form approximation of nonlinearity

Pages 655-672 | Received 23 Jan 1991, Published online: 27 Mar 2007
 

Abstract

The ability of a neural network to represent an input-output mapping is usually measured only in terms of data fit under some error criterion. This ‘black box’ approach provides little understanding of the network representation or of how the network should be structured. This paper investigates the topological structure of multilayer feedforward neural networks (MFNNs) and explores the relationship between the numbers of neurons in the hidden layers and finite-dimensional topological spaces. It is shown that a class of three-layer (two-hidden-layer) neural networks is equivalent to a canonical form approximation of nonlinearity. This theoretical framework yields insight into the architecture of multilayer feedforward neural networks, confirms the common belief that three-layer (two-hidden-layer) feedforward networks are sufficient for general applications, and provides an approach for determining the appropriate number of neurons in each hidden layer.
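For concreteness, the three-layer (two-hidden-layer) architecture the abstract refers to can be sketched as a forward pass through two nonlinear hidden layers and a linear output layer. This is a generic illustration only: the layer widths, the tanh activation, and the random weights below are arbitrary assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer widths: input -> hidden 1 -> hidden 2 -> output
n_in, n_h1, n_h2, n_out = 3, 8, 5, 2

W1, b1 = rng.standard_normal((n_h1, n_in)), np.zeros(n_h1)
W2, b2 = rng.standard_normal((n_h2, n_h1)), np.zeros(n_h2)
W3, b3 = rng.standard_normal((n_out, n_h2)), np.zeros(n_out)

def forward(x):
    """Forward pass of a two-hidden-layer feedforward network."""
    h1 = np.tanh(W1 @ x + b1)   # first hidden layer
    h2 = np.tanh(W2 @ h1 + b2)  # second hidden layer
    return W3 @ h2 + b3         # linear output layer

y = forward(np.ones(n_in))
print(y.shape)  # (2,)
```

The paper's contribution concerns how the widths `n_h1` and `n_h2` relate to the dimensionality of the underlying topological spaces; the sketch above only fixes the overall layered structure.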
