Abstract
Using techniques and concepts developed in the theory of spin glasses, we consider the optimization of the performance of simple attractor neural networks trained with noise, and we further analyse some of the consequences of such optimal training.