Abstract
A crucial problem is how to increase the power of connectionist networks (CN), since simply increasing the size of today's relatively small CNs often slows down and worsens learning and performance. There are three possible ways: (1) use more powerful structures; (2) increase the amount of stored information, and the power and variety of the basic processes; (3) have the network modify itself (learn, evolve) in more powerful ways. Today's connectionist networks use only a few of the many possible topological structures, handle only numerical values with very simple basic processes, and learn only by modifying weights associated with links. This paper examines the great variety of potentially much more powerful possibilities, focusing on what appear to be the most promising: appropriate brain-like structures (e.g. local connectivity, global convergence and divergence); matching, symbol-handling, and list-manipulating capabilities; and learning by extraction-generation-discovery.