References
- Battiti, R. 1992. First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method. Neural Computation, 4: 141–166.
- Becker, S. and Le Cun, Y. Improving the convergence of back-propagation learning with second order methods. In Proceedings of the 1988 Connectionist Models Summer School, Edited by: Touretzky, D., Hinton, G. and Sejnowski, T., pp. 29–37. San Mateo, CA: Morgan Kaufmann.
- Carter, J.P. Successfully Using Peak Learning Rates of 10 (and Greater) in Back-Propagation Networks with the Heuristic Learning Algorithm. In Proceedings of the IEEE 1st Int. Conf. on Neural Networks, San Diego, CA, pp. 645–651.
- Evans, D.J. and Sanossian, H.Y.Y. 1993. A Gradient Range Based Heuristic Algorithm for Back Propagation. Journal of Microcomputer Applications, 16: 179–188.
- Fahlman, S. 1989. Faster-Learning Variations on Back-Propagation: An Empirical Study. In Proceedings of the 1988 Connectionist Models Summer School, 38–51. Morgan Kaufmann.
- Hush, D.R. and Salas, J.M. Improving the Learning Rate of Back-Propagation with the Gradient Reuse Algorithm. In IEEE Int. Conf. on Neural Networks, San Diego, CA, Vol. 1, pp. 441–447.
- Jacobs, R.A. 1988. Increased Rates of Convergence through Learning Rate Adaptation. Neural Networks, 1(4): 295–308.
- Kramer, A.H. and Sangiovanni-Vincentelli, A. 1989. Efficient parallel learning algorithms for neural networks. In Advances in Neural Information Processing Systems, Edited by: Touretzky, D.S., Vol. 1, 40–48. San Mateo, CA: Morgan Kaufmann.
- Plaut, D., Nowlan, S.J. and Hinton, G.E. 1989. Experiments on Learning by Back Propagation. Pittsburgh, PA: Carnegie-Mellon University. Technical Report CMU-CS-86-126.
- Rumelhart, D.E., Hinton, G.E. and Williams, R.J. 1986. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Edited by: Rumelhart, D.E. and McClelland, J.L., 318–362. Cambridge, MA: MIT Press.
- Salomon, R. Improved Convergence Rate of Back-Propagation with Dynamic Adaptation of the Learning Rate.
- Sanossian, H.Y.Y. 1993. Neural Network Learning Algorithms. PhD Thesis, Loughborough University.
- Silva, F.M. and Almeida, L.B. 1990. Acceleration Techniques for the Back Propagation Algorithm. In Neural Networks, EURASIP Workshop, Sesimbra, Portugal.
- Sutton, R.S. Two Problems with Back Propagation and Other Steepest Descent Learning Procedures for Networks. In Proceedings of the 8th Annual Conference of the Cognitive Science Society, pp. 823–831.
- Van der Smagt, P.P. 1994. Minimisation Methods for Training Feedforward Neural Networks. Neural Networks, 7(1): 1–11.
- Watrous, R.L. Learning Algorithms for Connectionist Networks: Applied Gradient Methods of Nonlinear Optimization. In Proceedings of the IEEE 1st International Conference on Neural Networks, San Diego, CA, pp. 619–627.