References
- Almeida L B. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. Proc. 1st Int. Conf. on Neural Networks, San Diego, CA, June 1987. IEEE, Piscataway, NJ; vol II: 609–18
- Amari S. Natural gradient works efficiently in learning. Neural Comput. 1998; 10: 251–76
- Amit D J, Tsodyks M V. Quantitative study of attractor neural network retrieving at low spike rates: I. substrate-spikes, rates and neuronal gain. Network: Comput. Neural Syst. 1991; 2: 259–73
- Arbib M A. Brains, Machines and Mathematics. Springer, Berlin 1987; 63–4
- Blum A L, Rivest R L. Training a 3-node neural network is NP-complete. Neural Networks 1992; 5: 117–27
- Cooper D B. Nonsupervised adaptive signal detection and pattern recognition. Information Control 1964; 7: 416–44
- Douglas R J, Koch C, Mahowald M A, Martin K A C, Suarez H. Recurrent excitation in neocortical circuits. Science 1995; 269: 981–5
- Durbin R, Rumelhart D E. Product units: a computationally powerful and biologically plausible extension to backpropagation networks. Neural Comput. 1989; 1: 133–42
- Eckhorn R, Frien A, Bauer R, Woelborn T, Kehr H. High-frequency (60–90 Hz) oscillations in primary visual cortex of awake monkey. NeuroReport 1993; 4: 243–6
- Fitzsimonds R M, Song H-J, Poo M-M. Propagation of activity-dependent synaptic depression in simple neural networks. Nature 1997; 388: 439–48
- Frien A, Eckhorn R, Bauer R, Woelborn T, Kehr H. Stimulus-specific fast oscillations at zero phase between visual areas V1 and V2 of awake monkey. NeuroReport 1994; 5: 2273–7
- Gorse D, Shepherd A J, Taylor J G. The new era in supervised learning. Neural Networks 1997; 10: 343–52
- Hahnloser R H R. Generating network trajectories using gradient descent in state space. Proc. Int. Joint Conf. on Neural Networks, Anchorage, AK, May 1998. IEEE, Piscataway, NJ; 2373–237
- Hansel D, Sompolinsky H. Modeling feature selectivity in local cortical circuits. Methods in Neuronal Modeling, 2nd edn, ed C Koch, I Segev. MIT Press, Cambridge, MA 1997, ch 13
- Hebb D O. The Organization of Behavior. Wiley, New York 1949
- Hopfield J J. Neurons with graded response have collective properties like those of two-state neurons. Proc. Natl Acad. Sci., USA 1984; 81: 3088–92
- Kawato M. Cerebellum and motor control. Brain Theory and Neural Networks. MIT Press, Cambridge, MA 1995; 172–8
- Kohonen T. Self-Organization and Associative Memory. Springer, Berlin 1984, ch 2
- Magee J C, Johnston D. A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science 1997; 275: 209–13
- Markram H, Tsodyks M. Redistribution of synaptic efficacy between cortical pyramidal neurons. Nature 1996; 382: 807–10
- Marr D. A theory of cerebellar cortex. J. Physiol. 1969; 202: 437–70
- Pearlmutter B A. Learning state space trajectories in recurrent neural networks. Neural Comput. 1989; 1: 263–9
- Pineda F J. Generalization of back-propagation to recurrent neural networks. Phys. Rev. Lett. 1987; 59: 2229–32
- Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors. Nature 1986; 323: 533–6
- Schultz W, Dayan P, Montague P R. A neural substrate of prediction and reward. Science 1997; 275: 1593–9
- Sejnowski T J. Storing covariance with nonlinearly interacting neurons. J. Math. Biol. 1977; 4: 303–21
- Steriade M, McCormick D, Sejnowski T J. Thalamocortical oscillations in the sleeping and aroused brain. Science 1993; 262: 679–85
- White R H. The learning rate in back-propagation systems: an application of Newton's method. Proc. 1st Int. Joint Conf. on Neural Networks, San Diego, CA, June 1990. IEEE, Piscataway, NJ; vol 1: 679–84
- Widrow B, Hoff M E. Adaptive switching circuits. IRE WESCON Convention Record 1960; 96–104
- Williams R J, Zipser D. A learning algorithm for continually running fully recurrent networks. Neural Comput. 1989; 1: 270–80