Abstract
Feedforward neural networks with several input units, a single output unit, and a single hidden layer of sigmoid transfer functions are considered. A method is presented for interpreting the role of a hidden-layer unit geometrically, when such networks are used for real-valued function approximation, in terms of the regions of the input space over which that unit is, in a particular sense, most important. The relationship between this interpretation and the weight values of the network is highlighted. For the case in which the approximation is of interest primarily over a bounded region of the input space, it is then shown that this interpretation can be used to check for redundancy among hidden units and to remove any redundant units found. Finally, future research issues for which the interpretation may be useful are briefly discussed.
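For concreteness, the class of networks considered can be written in a standard form; the symbol names below are illustrative rather than taken from the paper. A network with input $\mathbf{x} \in \mathbb{R}^{n}$ and $h$ sigmoid hidden units computes

$$
f(\mathbf{x}) \;=\; v_{0} \;+\; \sum_{j=1}^{h} v_{j}\,\sigma\!\left(\mathbf{w}_{j}^{\top}\mathbf{x} + b_{j}\right),
\qquad
\sigma(t) \;=\; \frac{1}{1 + e^{-t}},
$$

where $\mathbf{w}_{j}$ and $b_{j}$ are the input weights and bias of hidden unit $j$, and $v_{j}$ is its weight to the single output unit. In such a formulation, the hyperplane $\mathbf{w}_{j}^{\top}\mathbf{x} + b_{j} = 0$ is the natural geometric object associated with unit $j$, which is the kind of weight-to-geometry relationship the interpretation builds on.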