A Hebbian/anti-Hebbian network which optimizes information capacity by orthonormalizing the principal subspace

A number of recent papers have used the approach of maximising information capacity or mutual information (MI) to examine unsupervised neural networks. In particular, for a linear 'compressing' N-input, M-output network (N > M) with noise on the input only, the author (1991) showed that MI is maximised when the output represents the principal subspace (spanned by the top M principal components) of the input. On the other hand, for a linear 'straight-through' M-input, M-output network with noise on the output only, MI is maximised (for a fixed output power) when the outputs are orthonormalised, i.e. decorrelated and of equal variance. A number of algorithms exist to achieve each of these optimal arrangements. In this paper, the author extends this work to develop an algorithm for the case of both input and output noise, with an output power constraint. He finds that the obvious algorithm, obtained by concatenating the two previous solutions, can be simplified.
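
To make the combined arrangement concrete, the following is a minimal numerical sketch, assuming Oja's subspace rule for the Hebbian feedforward weights and a Foldiak-style anti-Hebbian rule for the lateral weights. These particular update rules, learning rates and variance target are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: Hebbian feedforward learning (Oja subspace rule) combined with
# anti-Hebbian lateral decorrelation. Assumed, illustrative update rules;
# not the author's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 3                 # input and output dimensions (N > M)
eta_w, eta_l = 1e-3, 1e-2   # Hebbian and anti-Hebbian learning rates
target_var = 1.0            # assumed per-output variance (power constraint)

# Correlated Gaussian input x = A s, so the input has a clear principal subspace.
A = rng.normal(size=(N, N)) @ np.diag(np.linspace(2.0, 0.2, N))

W = rng.normal(scale=0.1, size=(M, N))  # feedforward (Hebbian) weights
L = np.zeros((M, M))                    # lateral (anti-Hebbian) weights

for _ in range(20000):
    x = A @ rng.normal(size=N)
    # Settled output of the laterally inhibited network: y = W x - L y,
    # i.e. y = (I + L)^{-1} W x at equilibrium.
    y = np.linalg.solve(np.eye(M) + L, W @ x)
    # Hebbian step (Oja subspace rule): pulls the rows of W into the
    # principal subspace of the input.
    W += eta_w * (np.outer(y, x) - np.outer(y, y) @ W)
    # Anti-Hebbian step: grows lateral inhibition between correlated outputs
    # and drives each output's variance toward the target power.
    L += eta_l * (np.outer(y, y) - target_var * np.eye(M))

# The output covariance should approach target_var * I: decorrelated outputs
# of equal variance, spanning the principal subspace.
Y = np.linalg.solve(np.eye(M) + L, W @ (A @ rng.normal(size=(N, 5000))))
print(np.round(Y @ Y.T / 5000, 2))
```

In this sketch the Hebbian term alone would recover the principal subspace (the input-noise optimum), while the anti-Hebbian term alone would orthonormalise the outputs (the output-noise optimum); running them together approximates the combined input-and-output-noise objective described above.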