Computing the Karhunen-Loeve Expansion with a Parallel, Unsupervised Filter System

We use the invariance principle together with the principles of maximum information extraction and maximum signal concentration to design a parallel, linear filter system that learns the Karhunen-Loeve expansion of a process from examples. In this paper we prove that the learning rule derived from these principles drives the system into stable states that are pure eigenfunctions of the input process.
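The paper's specific learning rule is not reproduced here, but the claimed behavior, an unsupervised linear filter system whose stable states are eigenfunctions of the input covariance, can be illustrated with a related, well-known rule: Sanger's generalized Hebbian algorithm, which also converges to the Karhunen-Loeve (principal-component) basis. The data dimensions, learning rate, and covariance below are arbitrary choices for the demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data whose covariance has known eigenvectors
# (the coordinate axes, with eigenvalues 5, 3, 1, 0.5).
d, n = 4, 20000
true_cov = np.diag([5.0, 3.0, 1.0, 0.5])
X = rng.multivariate_normal(np.zeros(d), true_cov, size=n)

k = 2                                    # number of filters (KL components)
W = rng.normal(scale=0.1, size=(k, d))   # filter weights, one row per unit
eta = 1e-3                               # learning rate

for x in X:
    y = W @ x                            # filter outputs
    # Sanger's rule: Hebbian term minus a deflation term that makes
    # each unit orthogonal to the units before it.
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# The rows of W should align (up to sign) with the leading eigenvectors
# of the sample covariance, i.e. the stable states are eigenfunctions.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, ::-1][:, :k].T          # leading eigenvectors as rows
alignment = np.abs(np.sum(W * top, axis=1)) / np.linalg.norm(W, axis=1)
print(alignment)                         # values near 1.0 indicate convergence
```

The deflation term (`np.tril(...)`) is what breaks the symmetry between units: without it, an Oja-type rule learns only the dominant subspace, whereas with it each filter settles on an individual eigenfunction, ordered by eigenvalue, which matches the stability result the abstract describes.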
