Original Article

Hidden information maximization for feature detection and rule discovery

Pages 577-602 | Received 07 Jun 1994, Published online: 09 Jul 2009
