Supervised Learning for Decorrelated Gaussian Networks

This paper presents a new two-stage learning paradigm that exploits the localization properties of Gaussian neurons. In the first stage, a single layer of Gaussian functions is trained in a novel unsupervised fashion to model the distribution of the network input. The input model is obtained by minimizing a cost function whose first term can be seen as an implementation of the standard Hebbian learning law. The second term of the cost function has an “anti-Hebbian” effect that reinforces the competitive learning. In the second stage, the receptive field distribution obtained in the first stage is used for function approximation. For comparison, a standard single-hidden-layer Gaussian network is optimized with its initial centers set to those produced by the first-stage learning.
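
To make the two-stage structure concrete, the sketch below shows one plausible instantiation in Python/NumPy. The abstract does not reproduce the paper's exact cost function, so the Hebbian term (centers pulled toward inputs that activate them) and the anti-Hebbian term (centers of units with correlated activations pushed apart) are illustrative assumptions, as are all function names, the shared width parameter, and the learning-rate settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_activations(X, centers, width):
    """Activations of Gaussian units for inputs X (n_samples x dim)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# --- Stage 1: unsupervised modeling of the input distribution ---
def stage1(X, n_units=8, width=0.3, lr=0.05, lam=0.1, epochs=200):
    centers = X[rng.choice(len(X), n_units, replace=False)].copy()
    for _ in range(epochs):
        A = gaussian_activations(X, centers, width)            # n x m
        # Hebbian term (assumed form): move each center toward the
        # activation-weighted mean of the inputs that excite it.
        hebb = (A.T @ X) / (A.sum(0)[:, None] + 1e-9) - centers
        # Anti-Hebbian term (assumed form): push a unit's center away
        # from the centers of units whose activations correlate with
        # its own, decorrelating the receptive fields.
        C = (A.T @ A) / len(X)
        np.fill_diagonal(C, 0.0)
        repel = centers - (C @ centers) / (C.sum(1)[:, None] + 1e-9)
        centers += lr * (hebb + lam * repel)
    return centers

# --- Stage 2: supervised function approximation on the fixed fields ---
def stage2(X, y, centers, width):
    A = gaussian_activations(X, centers, width)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)  # linear output layer
    return w

X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
centers = stage1(X)
w = stage2(X, y, centers, width=0.3)
pred = gaussian_activations(X, centers, 0.3) @ w
print("train MSE:", float(np.mean((pred - y) ** 2)))
```

In this reading, stage 1 needs no target values at all, and stage 2 reduces to a linear least-squares fit once the receptive fields are frozen; the comparison network described above would instead adapt the centers jointly with the output weights, starting from the stage-1 solution.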