Original Article

On learning simple neural concepts: from halfspace intersections to neural decision lists

Pages 67-85 | Received 19 Aug 1991, Published online: 09 Jul 2009

References

  • Angluin D. Queries and concept learning. Machine Learning 1988; 2: 319
  • Bartlett P L, Williamson R C. Proc. Fourth Workshop on Computational Learning Theory. Morgan Kaufmann, Palo Alto, CA 1991; 24
  • Baum E B. On learning a union of halfspaces. J. Complexity 1990; 6: 67
  • Baum E B. The perceptron algorithm is fast for nonmalicious distributions. Neural Computation 1990; 2: 248
  • Baum E B. A polynomial time algorithm that learns two hidden unit nets. Neural Computation 1990; 2: 510
  • Blum A, Rivest R L. Training a 3-node neural network is NP-complete. Proc. First Workshop on Computational Learning Theory. Morgan Kaufmann, Palo Alto, CA 1988; 9
  • Blumer A, Ehrenfeucht A, Haussler D, Warmuth M K. Learnability and the Vapnik-Chervonenkis dimension. J. ACM 1989; 36: 929
  • Blumer A, Ehrenfeucht A, Haussler D, Warmuth M K. Occam's razor. Inf. Process. Lett. 1987; 24: 377
  • Chvátal V. A greedy heuristic for the set-covering problem. Math. Oper. Res. 1979; 4: 3
  • Cover T M. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Electron. Comput. 1965; EC-14: 326
  • Dantzig G B. Linear programming and extensions. Princeton University Press, Princeton, NJ 1963
  • Duda R O, Hart P E. Pattern Classification and Scene Analysis. Wiley, New York 1973
  • Fahlman S E, Lebiere C. The cascade-correlation learning architecture. Adv. Neural Information Processing Systems 1990; 2: 524
  • Frean M. The upstart algorithm: A method for constructing and training neural networks. Neural Computation 1990; 2: 198
  • Gallant S I. Perceptron-based learning algorithms. IEEE Trans. Neural Networks 1990; NN-1: 179
  • Garey M R, Johnson D S. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York 1979
  • Golea M, Marchand M, Hancock T. On learning μ-perceptron networks with binary weights. NIPS*92, 1992 (to appear)
  • Golea M, Marchand M. A growth algorithm for neural network decision trees. Europhys. Lett. 1990; 12: 205
  • Hancock T, Golea M, Marchand M. Learning nonoverlapping perceptron networks from examples and membership queries. Harvard University preprint TR-26-91, 1991 (submitted to Machine Learning)
  • Haussler D. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artif. Intell. 1988; 36: 177
  • Holte R C. Very simple classification rules perform well on most data sets. TR-91-16, Computer Science, University of Ottawa, 1991 (to appear in Machine Learning)
  • Johnson D S. Approximation algorithms for combinatorial problems. J. Comput. System Sci. 1974; 9: 256
  • Johnson D S, Preparata F P. The densest hemisphere problem. Theor. Comput. Sci. 1978; 6: 93
  • Judd J S. Neural Network Design and the Complexity of Learning. MIT Press, Cambridge, MA 1990
  • Karmarkar N. A new polynomial time algorithm for linear programming. Combinatorica 1984; 4: 373
  • Lin J H, Vitter J S. Complexity results on learning by neural nets. Machine Learning 1991; 6: 211
  • Marchand M, Golea M, Ruján P. A convergence theorem for sequential learning in two-layer perceptrons. Europhys. Lett. 1990; 11: 487
  • Mézard M, Nadal J-P. Learning in feedforward layered networks: the tiling algorithm. J. Phys. A: Math. Gen. 1989; 22: 2191
  • Minsky M, Papert S. Perceptrons. MIT Press, Cambridge, MA 1988
  • Nadal J-P. Study of a growth algorithm for a feedforward network. Int. J. Neural Systems 1989; 1: 55
  • Papadimitriou C H, Steiglitz K. Combinatorial Optimization. Prentice-Hall, Englewood Cliffs, NJ 1982
  • Pitt L, Valiant L G. Computational limitations on learning from examples. J. ACM 1988; 35: 965
  • Preparata F P, Shamos M I. Computational Geometry. Springer, New York 1985
  • Quinlan J R. Induction of decision trees. Machine Learning 1986; 1: 81
  • Rivest R L. Learning decision lists. Machine Learning 1987; 2: 229
  • Ruján P. A fast method for calculating the perceptron with maximal stability. Preprint, 1992 (submitted to Neural Computation)
  • Ruján P, Marchand M. Learning by minimizing resources in neural networks. Complex Systems 1989; 3: 229
  • Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors. Nature 1986; 323: 533
  • Rumelhart D E, McClelland J L. Parallel Distributed Processing. MIT Press, Cambridge, MA 1986; vol 1
  • Shapiro A D. Structured Induction and Expert Systems. Addison-Wesley, Wokingham, UK 1987
  • Sirat J A, Nadal J-P. Neural trees: a new efficient tool for classification. Network 1990; 1: 423
  • Smale S. On the average number of steps of the simplex method of linear programming. Math. Program. 1983; 27: 241
  • Utgoff P E. Perceptron trees: a case study in hybrid concept representation. Connection Sci. 1989; 1: 377
  • Valiant L G. A theory of the learnable. Commun. ACM 1984; 27: 1134
  • Valiant L G, Warmuth M K. The border augmented symmetric differences of halfspaces is learnable. Unpublished manuscript, 1989
