References
- Allenby G. Cross-validation, the Bayes theorem, and small-sample bias. J. Bus. Econ. Stat. 1990; 8: 171–8
- Atlas L, Cole R, Connor J, El-Sharkawi M, Marks R, Muthusamy Y, Barnard E. Performance comparisons between backpropagation networks and classification trees on three real-world applications. Advances in Neural Information Processing Systems 2, D Touretzky. Morgan Kaufmann, San Mateo, CA 1990; 622–9
- Bernasconi J, Gustafson K. Human and machine quick modeling. Advances in Neural Information Processing Systems 4, J Moody, S Hansen, R Lippmann. Morgan Kaufmann, San Mateo, CA 1992; 1151–8
- Bower G, Trabasso T. Attention in Learning: Theory and Research. Wiley, New York 1968
- Bruner J, Goodnow J, Austin G. A Study of Thinking. Wiley, New York 1956
- Danyluk A. The use of explanations for similarity-based learning. Proc. 10th Int. Joint Conf. on Artificial Intelligence, Italy, August 1987, J McDermott. Morgan Kaufmann, San Mateo, CA 1987; 274–6
- Dietterich T G, Hild H, Bakiri G. A comparative study of ID3 and back propagation for English text-to-speech mapping. Proc. 7th Int. Conf. on Machine Learning, Austin, TX. B Porter, R Mooney. Morgan Kaufmann, San Mateo, CA 1990; 24–31
- Fisher D H, McKusick K B. An empirical comparison of ID3 and back-propagation. Proc. 11th Int. Joint Conf. on Artificial Intelligence, August 1989, N S Sridharan. Morgan Kaufmann, San Mateo, CA 1989; 788–93
- Friedman J H. Multivariate adaptive regression splines. Technical Report 102, Laboratory for Computational Statistics, Stanford University 1988
- Gluck M, Bower G. From conditioning to category learning: an adaptive network model. J. Exp. Psychol.: General 1988; 117: 227–47
- Goggin S, Johnson K, Gustafson K. Primacy and recency effects in backpropagation learning. Prog. Neural Nets 1993; 2: 271–98
- Gordon M B, Peretto P. The statistical distribution of Boolean gates in two-inputs, one-output multilayered neural networks. J. Phys. A: Math. Gen. 1990; 23: 3061–72
- Krauth W, Mézard M. Learning algorithms with optimal stability in neural networks. J. Phys. A: Math. Gen. 1987; 20: L745–52
- Kruschke J. ALCOVE: an exemplar-based connectionist model of category learning. Psychol. Rev. 1992; 99: 22–44
- Kruschke J. Human category learning: implications for backpropagation models. Connection Sci. 1993; 5: 3–36
- Mitchell T. Generalization as search. Artif. Intell. 1982; 18: 203–26
- Nosofsky R. Attention, similarity, and the identification-categorization relationship. J. Exp. Psychol.: General 1986; 115: 39–57
- Nosofsky R. Rule-plus-exception model of classification learning. Preprint 1993
- Pao Y H, Hu C H. Processing of pattern based information, Part I: Inductive methods suitable for use in pattern recognition and artificial intelligence. Advances in Information Systems Science 9. Plenum, New York 1985
- Pao Y H. Adaptive Pattern Recognition and Neural Networks. Addison Wesley, Reading, MA 1989
- Pavel M, Gluck M A, Henkle V. Generalization by humans and multilayer adaptive networks. Proc. 10th Annual Conf. of the Cognitive Science Society, Montreal, August 1988, V L Patel, et al. Lawrence Erlbaum Associates, Hillsdale, NJ 1988; 680–7
- Pavel M, Gluck M A, Henkle V. Constraints on adaptive networks for modelling human generalization. Advances in Neural Information Processing Systems 1, D Touretzky. Morgan Kaufmann, San Mateo, CA 1989; 2–10
- Pazzani M, Dyer M. A comparison of concept identification in human learning and network learning with the generalized delta rule. Proc. 10th Int. Joint Conf. on Artificial Intelligence, Italy, August 1987; 147–50
- Pazzani M, Dyer M, Flowers M. Using prior learning to facilitate the learning of new causal theories. Proc. 10th Int. Joint Conf. on Artificial Intelligence, Italy, August 1987; 277–9
- Picard R, Cook R D. Cross-validation of regression models. J. Am. Stat. Ass. 1984; 79: 575–83
- Pratt L Y, Norton S W. Neural networks and decision tree induction: exploring the relationship between two research areas. NIPS'90 Workshop #5 Summary 1990; 7
- Quinlan J R. Learning efficient classification procedures and their application to chess end games. Machine Learning: An Artificial Intelligence Approach, R S Michalski, J G Carbonell, T M Mitchell. Springer, Berlin 1984; 463–82
- Rumelhart D E, McClelland J L. Parallel Distributed Processing, D E Rumelhart, J L McClelland. MIT Press, Cambridge, MA 1986; 1: 10, 12
- Sanger T. Basis-function trees as a generalization of local variable selection methods for function approximation. Advances in Neural Information Processing Systems 3, R P Lippmann, et al. Morgan Kaufmann, San Mateo, CA 1991; 700–6
- Shavlik J, Mooney R, Towell G. Symbolic and neural learning algorithms: an experimental comparison. Mach. Learn. 1991; 6: 111–43
- Shepard R. Toward a universal law of generalization for psychological science. Science 1987; 237: 1317–23
- Tsoi A C, Pearson R A. Comparison of three classification techniques, CART, C4.5 and multilayer perceptrons. Advances in Neural Information Processing Systems 3, R P Lippmann, et al. Morgan Kaufmann, San Mateo, CA 1991; 963–9
- Weiss S, Kulikowski C. Computer Systems that Learn. Morgan Kaufmann, San Mateo, CA 1991