Original Article

A geometric interpretation of hidden layer units in feedforward neural networks

Pages 19-25 | Received 07 Oct 1991, Published online: 09 Jul 2009
 

Abstract

Feedforward neural networks with several input units, a single output unit and a single hidden layer of sigmoid transfer functions are considered. A method is presented for interpreting the role of a hidden-layer unit geometrically, when such networks are used for real-valued function approximation problems, in terms of the regions of the input space over which the unit is most important in a particular sense. The relationship between this interpretation and the weight values of the network is highlighted. For the case in which the approximation is of most interest over a bounded region of the input space, it is then shown that this interpretation may be used to check for redundancy among hidden units, and to remove any such units found. Finally, future research issues for which the interpretation may be useful are briefly discussed.
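The setup and the pruning idea from the abstract can be illustrated with a short, self-contained sketch. This is not the paper's method: the class OneHiddenLayerNet, the helper prune_saturated_units, the activation-spread saturation test and the tolerance tol below are all illustrative assumptions. The geometric picture is that each hidden unit's weights define a hyperplane w.x + b = 0 in input space, and its sigmoid varies only within a slab around that hyperplane; a unit whose sigmoid is effectively constant over the bounded region of interest is then a candidate for removal, with its near-constant contribution folded into the output bias.

    # Minimal sketch (an assumption, not the paper's code) of the network class
    # considered: several inputs, one sigmoid hidden layer, one linear output.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class OneHiddenLayerNet:
        def __init__(self, W, b, v, c):
            self.W = np.asarray(W, dtype=float)  # (n_hidden, n_inputs) hidden weights
            self.b = np.asarray(b, dtype=float)  # (n_hidden,) hidden biases
            self.v = np.asarray(v, dtype=float)  # (n_hidden,) output weights
            self.c = float(c)                    # output bias

        def forward(self, X):
            # X: (n_samples, n_inputs); returns (n_samples,) real-valued outputs
            return sigmoid(X @ self.W.T + self.b) @ self.v + self.c

    def prune_saturated_units(net, X_region, tol=1e-3):
        # Illustrative redundancy check: a hidden unit whose activation range
        # over sample points from the bounded region of interest is below tol
        # is effectively saturated (constant) there, so it can be removed and
        # its mean contribution folded into the output bias.
        H = sigmoid(X_region @ net.W.T + net.b)   # (n_samples, n_hidden)
        spread = H.max(axis=0) - H.min(axis=0)    # activation range per unit
        keep = spread > tol
        const = (net.v[~keep] * H[:, ~keep].mean(axis=0)).sum()
        return OneHiddenLayerNet(net.W[keep], net.b[keep], net.v[keep],
                                 net.c + const)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        net = OneHiddenLayerNet(W=rng.normal(size=(5, 2)),
                                b=rng.normal(size=5),
                                v=rng.normal(size=5), c=0.1)
        # Sample the bounded region of interest, here the unit square [0, 1]^2.
        X = rng.uniform(0.0, 1.0, size=(500, 2))
        pruned = prune_saturated_units(net, X)
        print("max |change| over region:",
              np.abs(net.forward(X) - pruned.forward(X)).max())

Removal is exact only in the limit of perfect saturation; the printed maximum change over the sampled region gives an empirical check on the error introduced by pruning.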
