Abstract
Modern electronic systems, especially sensor and imaging systems, are beginning to incorporate their own neural network sub-systems. For these neural systems to learn in real time, they must be implemented in VLSI technology, with as much of the learning process as possible incorporated on-chip. Most VLSI implementations directly realise a set of neural processing cells that can be connected together in an arbitrary fashion. The work presented here utilises two-dimensional instruction systolic arrays in an attempt to define a general neural architecture that is closer to the biological basis of neural networks: it is the synapses themselves, rather than the neurons, that have dedicated processing units. The architecture has been defined, along with a limited instruction set, and has been shown under simulation to operate correctly for the backpropagation training algorithm.