Massive Weight Sharing: A Cure For Extremely Ill-Posed Problems

In most learning problems, adaptation to given examples is well-posed because the number of examples far exceeds the number of internal parameters in the learning machine. Extremely ill-posed learning problems are, however, common in image and spectral analysis. They are characterized by a vast number of highly correlated inputs, e.g. pixel or pin values, and a modest number of patterns, e.g. images or spectra. In this paper we show, for the case of a set of PET images differing only in the values of one stimulus parameter, that it is possible to train a neural network to learn the underlying rule without using an excessive number of network weights or large amounts of computer time. The method is based upon the observation that the standard learning rules conserve the subspace spanned by the input images.
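The subspace-conservation observation can be illustrated with a minimal sketch (the data and dimensions below are illustrative, not from the paper): under gradient-descent learning, every weight update is a linear combination of the input patterns, so a weight vector initialized in the span of the inputs never leaves it.

```python
import numpy as np

# Illustrative setup: many correlated inputs, few patterns
# (an extremely ill-posed regime, as described in the abstract).
rng = np.random.default_rng(0)
n_inputs = 1000   # e.g. pixels per image
n_patterns = 10   # e.g. number of images

X = rng.standard_normal((n_patterns, n_inputs))  # input patterns (rows)
y = rng.standard_normal(n_patterns)              # stimulus parameter per pattern

w = np.zeros(n_inputs)  # zero init lies (trivially) in span of the inputs
eta = 1e-4              # small step size for stability
for _ in range(200):
    err = X @ w - y
    w -= eta * X.T @ err  # each update is a combination of the rows of X

# Check: w stays in the row space of X, so learning effectively
# involves only n_patterns coordinates, not n_inputs free weights.
Q, _ = np.linalg.qr(X.T)  # orthonormal basis for span of the inputs
residual = np.linalg.norm(w - Q @ (Q.T @ w))
print(residual < 1e-8)  # prints True
```

This is why the number of effective parameters can be kept near the number of patterns rather than the number of pixels, which is the essence of the weight-sharing cure discussed in the paper.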