Abstract
Recent research has shown that neural networks can be employed for image compression. The predominant approach centres on training a network by back-propagation on overlapping frames of the original image. This approach has several deficiencies. First, no time bounds for compressing an image are provided. Second, back-propagation is difficult to apply because of its computational complexity. To overcome these shortcomings we propose a different approach, based on a general class 𝒩* of 3-layer neural networks with 2(N+1) hidden units. We show that 𝒩* can uniquely represent a large number of images; in fact, this class grows faster than exponentially. Instead of being trained, a network is constructed automatically. The construction process runs in worst-case time O_worst(n) = n⁴ − n², where n is the image size. Obtainable (lossless) compression rates exceed 97% for square images of size 256.