Abstract
This paper presents a method for organizing the internal representation (hidden unit patterns) of recurrent neural networks. An organized representation should allow the mechanism of a network to be interpreted easily and explicitly. By "organized" we mean a representation with low information entropy, or equivalently high information-theoretic redundancy (little uncertainty); the redundancy reflects the degree of organization or structure in the hidden unit patterns. Recurrent networks, especially those with a large number of hidden units, tend to produce internal representations that are extremely distributed: information about the input patterns is spread over a great number of hidden units, which makes it difficult to interpret the meaning of the internal representation or the receptive fields of the hidden units. To cope with this problem, we use the complexity term proposed by Rumelhart. With the complexity term alone, the majority of connections tend toward zero, so we use a modified method in which the complexity term acts only on positive connections, while the magnitude of negative connections is increased. We observed that the information-theoretic redundancy of the hidden unit patterns produced with the complexity term was higher than that obtained with the standard quadratic error function; that is, the hidden unit patterns obtained with the complexity term are more organized than those computed with the standard error function. Finally, a correlation analysis confirmed that the activation of connections was positively correlated with the redundancy. We therefore infer that the complexity term strengthens the connections of the units and, in proportion to their activation, increases the redundancy of the hidden unit patterns. In other words, the complexity term is effective in generating organized activities of hidden units.
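To make the two central quantities concrete, the following is a minimal sketch, not the paper's implementation: it assumes Rumelhart's complexity penalty takes the common form Σ w²/(1+w²), that the modification simply restricts the penalty to positive connections, and that redundancy is computed as R = 1 − H/H_max over binarized hidden unit activations with a threshold of 0.5. The function names and the binarization threshold are our own illustrative choices; the paper's exact estimator may differ.

```python
import numpy as np

def complexity_term(weights, positive_only=True):
    """Rumelhart-style complexity penalty sum w^2 / (1 + w^2).

    With positive_only=True (our reading of the paper's modification),
    the penalty is applied only to positive connections, leaving
    negative connections free to grow in magnitude.
    """
    w = np.asarray(weights, dtype=float)
    if positive_only:
        w = np.where(w > 0.0, w, 0.0)  # negative weights are not penalized
    return float(np.sum(w ** 2 / (1.0 + w ** 2)))

def redundancy(hidden_patterns):
    """Information-theoretic redundancy R = 1 - H / H_max.

    hidden_patterns: array of shape (n_patterns, n_hidden_units).
    Activations are binarized at 0.5 (an assumption); H is the mean
    per-unit binary entropy in bits, and H_max = 1 bit per unit.
    High R means hidden units behave in a highly structured way.
    """
    h = (np.asarray(hidden_patterns, dtype=float) > 0.5).astype(int)
    p = h.mean(axis=0)  # empirical probability each unit is "on"
    eps = 1e-12         # guard against log2(0)
    H = -(p * np.log2(p + eps) + (1.0 - p) * np.log2(1.0 - p + eps))
    return float(1.0 - H.mean())
```

For example, hidden patterns where every unit is always on (maximally structured) give a redundancy near 1, while patterns where each unit is on half the time give a redundancy near 0, matching the abstract's use of redundancy as a measure of organization.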