Original Articles

Greedy network-growing algorithm with Minkowski distances

Pages 157-177 | Received 17 Nov 2005, Accepted 16 May 2006, Published online: 14 Feb 2007
 

Abstract

In this paper, we propose a new network-growing method to accelerate learning and to extract explicit features from complex input patterns. We have previously proposed a network-growing algorithm called the greedy network-growing algorithm (Kamimura et al. 2002, Kamimura 2003). With this algorithm, a network grows gradually by maximizing information on input patterns. In the original algorithm, the inverse of the squared Euclidean distance between input patterns and connection weights is used to produce competitive unit outputs. When applied to some problems, that method has shown slow learning, and it sometimes cannot reach a state where information is large enough to produce explicit internal representations. To remedy this shortcoming, we introduce here the Minkowski distance between input patterns and connection weights for producing competitive unit outputs. When the power of the Minkowski distance is larger, some detailed parts of input patterns are eliminated, which enables networks to converge faster and to extract the main parts of input patterns. We applied the new method to the well-known dipole problem, a student survey on computer skills, and the analysis of some economic data. In these experiments, the results confirmed that, compared with the previous algorithm with Euclidean distance, the new method with Minkowski distance can significantly accelerate learning and extract clearer features.
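The abstract describes the competitive unit outputs only informally, so the following is a minimal sketch rather than the authors' exact formulation: it assumes the inverse-square output form of the earlier Euclidean algorithm is retained and only the distance is replaced by a Minkowski distance of order n. The function names, the normalization step, and the eps safeguard are illustrative assumptions.

    import numpy as np

    def minkowski_distance(x, w, n):
        """Minkowski distance of order n between an input pattern x
        and a weight vector w; n = 2 gives the Euclidean distance."""
        return np.sum(np.abs(x - w) ** n) ** (1.0 / n)

    def competitive_outputs(x, W, n=2.0, eps=1e-10):
        """Normalized competitive unit outputs for one input pattern.

        x   : (D,) input pattern
        W   : (M, D) connection weights, one row per competitive unit
        n   : Minkowski order; larger n down-weights small per-dimension
              differences, so only dominant features drive the competition
        eps : small constant to avoid division by zero (an added safeguard,
              not part of the original formulation)
        """
        d = np.array([minkowski_distance(x, w, n) for w in W])
        v = 1.0 / (d ** 2 + eps)   # inverse squared distance, as in the Euclidean variant
        return v / v.sum()          # normalize so the outputs sum to one

This also illustrates why a larger power suppresses detail: as n grows, the Minkowski distance is increasingly dominated by the largest coordinate differences (approaching the Chebyshev distance in the limit), so small per-dimension mismatches contribute little and competition is decided by the main parts of the input patterns.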

Notes

Initial values of connection weights are given by uniform random numbers in the range between −0.0001 and 0.0001.

In Kamimura et al. (2002), we updated only the connections into a newly added competitive unit. In this paper, by contrast, we update all connections, whether or not they lead into the new unit.

We used the conscience learning method provided by the MATLAB Neural Network Toolbox with all default values, except that the number of learning epochs was set to 1000; no improvement was observed beyond this point.

We used SPSS for the computations, with all default values.

Additional information

Notes on contributors

Osamu Uchida

[email protected]

