Abstract
In this paper, we propose a new network-growing method that accelerates learning and extracts explicit features from complex input patterns. We have previously proposed a network-growing algorithm called the greedy network-growing algorithm (Kamimura et al. 2002, Kamimura 2003), in which a network grows gradually by maximizing information about input patterns. In that algorithm, the inverse of the squared Euclidean distance between input patterns and connection weights is used to produce competitive unit outputs. When applied to some problems, however, the method learns slowly, and it sometimes fails to reach a state where information is large enough to produce explicit internal representations. To remedy this shortcoming, we introduce here the Minkowski distance between input patterns and connection weights for producing competitive unit outputs. When the power of the Minkowski distance is larger, fine detail in the input patterns is suppressed, which enables networks to converge faster and to extract the main features of input patterns. We applied the new method to the well-known dipole problem, a student survey on computer skills, and the analysis of some economic data. In these experiments, the results confirmed that, compared with the previous algorithm based on Euclidean distance, the new method with Minkowski distance significantly accelerates learning and extracts clearer features.
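The output rule described in the abstract can be illustrated with a short sketch. The code below is only a minimal illustration, not the authors' implementation: it assumes that each competitive unit's raw activation is the inverse of the Minkowski distance between the input and that unit's weight vector, normalized across units; the function and variable names are ours. It also shows the property claimed in the abstract: as the power n grows, the distance is increasingly dominated by the largest coordinate differences, so fine detail in the input is suppressed.

```python
import numpy as np

def competitive_outputs(x, W, n=2.0, eps=1e-10):
    """Normalized competitive unit outputs from inverse Minkowski distance.

    x   : (D,) input pattern
    W   : (M, D) connection weights, one row per competitive unit
    n   : Minkowski power; larger n lets the largest coordinate
          differences dominate, suppressing fine detail
    eps : small constant to avoid division by zero
    """
    # Minkowski distance from the input to each unit's weight vector
    d = np.sum(np.abs(W - x) ** n, axis=1) ** (1.0 / n)
    # Inverse distance as the raw activation of each competitive unit
    v = 1.0 / (d + eps)
    # Normalize so the outputs sum to one across all competitive units
    return v / v.sum()

# Illustrative input and two competitive units (values are arbitrary)
x = np.array([0.2, 0.8])
W = np.array([[0.1, 0.9],   # unit 0: small, even differences from x
              [0.6, 0.7]])  # unit 1: one large, one small difference
p2 = competitive_outputs(x, W, n=2.0)  # ordinary Euclidean case
p6 = competitive_outputs(x, W, n=6.0)  # higher power: detail suppressed
```

For unit 1, the Minkowski distance with n = 6 is close to its largest single coordinate difference (0.4), whereas the Euclidean distance still reflects both coordinates; this is the sense in which larger powers eliminate detailed parts of the input patterns.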
Notes
Initial values of connection weights are given by uniform random numbers ranging between −0.0001 and 0.0001.
In Kamimura et al. (2002), we updated only the connections into a newly added competitive unit. In this paper, on the other hand, we update all connections, whether or not they lead into the new unit.
We used the conscience learning method provided by the Matlab neural network package with all default values, except that the number of learning epochs was set to 1000; no improvement could be observed beyond this point.
We used SPSS for the computations, with all default values.