Linear feature extraction in networks with lateral connections

Presents a novel unsupervised learning paradigm for feature extraction in linear networks with lateral connections, under the constraint that the input-output map introduces no information distortion. The latter is guaranteed by restricting the determinant of the Jacobian of the linear input-output transformation to remain equal to one throughout learning. Under the assumption that the input signals are Gaussian, the presented learning rule progressively minimizes the redundancy at the output layer until a factorial output representation is obtained. The redundancy is characterized by a suitably chosen entropy function whose minimum corresponds to decorrelation of the network outputs. The learning paradigm is based on Lyapunov arguments and is derived for networks with both symmetric and anti-symmetric lateral connections. Examples that validate the introduced learning paradigm are presented.
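
The abstract does not spell out the update rule, so the following NumPy sketch is only one plausible illustration of the general idea: lateral decorrelation of Gaussian inputs under a unit-Jacobian (volume-preserving) constraint. It restricts the lateral matrix L to be strictly lower-triangular, which makes det(I − L) = 1 hold by construction, and uses a standard anti-Hebbian/LMS-style update; the data, learning rate, and triangular parameterization are assumptions for the demo, not the paper's symmetric or anti-symmetric formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated Gaussian inputs (illustrative data, not from the paper):
# x = A s with a random mixing matrix A, standardized to unit variance per channel.
n, T = 4, 20000
A = rng.normal(size=(n, n))
X = A @ rng.normal(size=(n, T))
X /= X.std(axis=1, keepdims=True)

# Lateral weights L restricted to be strictly lower-triangular, so the map
# y = (I - L) x is unit-triangular and det(I - L) = 1 by construction --
# one concrete way to realize the unit-Jacobian (no-distortion) constraint.
L = np.zeros((n, n))
mask = np.tri(n, k=-1, dtype=bool)   # strictly lower-triangular positions
eta = 0.01                           # learning rate (illustrative value)

for t in range(T):
    x = X[:, t]
    y = x - L @ x                    # y_i = x_i - sum_{j<i} L_ij x_j
    # Anti-Hebbian / LMS-style step: stochastic gradient descent on E[y_i^2],
    # driving E[y_i x_j] -> 0 for j < i, i.e., decorrelated outputs.
    dL = np.outer(y, x)
    L[mask] += eta * dL[mask]

Y = X - L @ X
C = np.cov(Y)
print("Off-diagonal output covariance (should be near zero):")
print(np.round(C - np.diag(np.diag(C)), 3))
```

For Gaussian signals, decorrelation already yields statistical independence, which is why driving the off-diagonal output covariances to zero suffices for the factorial representation the abstract describes; the triangular parameterization here is simply the easiest way to pin the Jacobian determinant to one, whereas the paper derives rules for symmetric and anti-symmetric lateral connections.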