Self-organizing operator map for nonlinear dimension reduction

Dimension reduction is an important problem arising, for example, in feature extraction, pattern recognition, and data compression. It is often performed with principal component analysis (PCA), but that approach is suitable only when the data are distributed approximately linearly. In this paper, neural network learning algorithms combining Kohonen's self-organizing map (SOM) and Oja's PCA rule are studied for the challenging task of nonlinear dimension reduction. The neural network has the structure of a self-organizing operator map in which the neurons, i.e. the operators, are affine subspaces instead of single weight vectors. Adaptive algorithms derived from an optimization criterion are briefly reviewed, but the emphasis is on computationally more efficient and stable learning-rate-free, K-means-type batch algorithms. Simulations on image data show that these methods outperform the sequential methods proposed earlier.
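
The abstract does not reproduce the paper's optimization criterion or update equations, but the batch algorithm it describes has the general shape of K-means with affine subspaces plus SOM neighborhood smoothing: each unit is a mean vector together with an orthonormal basis fitted by local PCA, winners are chosen by reconstruction error, and each pass replaces every unit by a neighborhood-weighted refit, so no learning rate appears. The NumPy sketch below illustrates that idea only; the 1-D map topology, the Gaussian neighborhood and its shrink schedule, and all function and parameter names (operator_map_batch, n_units, subspace_dim, sigma0) are illustrative assumptions, not the authors' specification.

    import numpy as np

    def operator_map_batch(X, n_units=10, subspace_dim=2, n_iters=20,
                           sigma0=2.0, sigma_min=0.2, seed=0):
        # Batch "operator map" sketch: every unit is an affine subspace
        # (mean vector + orthonormal basis) refit by neighborhood-weighted
        # local PCA.  Assignment uses reconstruction error; updates are
        # K-means-style batch refits, so no learning rate is needed.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        coords = np.arange(n_units, dtype=float)   # assumed 1-D map topology
        means = X[rng.choice(n, n_units, replace=False)].copy()
        bases = np.stack([np.linalg.qr(rng.standard_normal((d, subspace_dim)))[0]
                          for _ in range(n_units)])

        for it in range(n_iters):
            # Shrinking Gaussian neighborhood (the schedule is an assumption)
            frac = it / max(1, n_iters - 1)
            sigma = sigma0 * (sigma_min / sigma0) ** frac

            # Winner = unit whose affine subspace reconstructs the sample best
            errs = np.empty((n, n_units))
            for i in range(n_units):
                R = X - means[i]                   # residuals from unit mean
                P = (R @ bases[i]) @ bases[i].T    # projection onto subspace
                errs[:, i] = np.sum((R - P) ** 2, axis=1)
            winners = errs.argmin(axis=1)

            # H[i, t]: neighborhood weight of sample t for unit i
            H = np.exp(-0.5 * ((coords[:, None]
                                - coords[winners][None, :]) / sigma) ** 2)

            # Learning-rate-free batch update: weighted mean + weighted PCA
            for i in range(n_units):
                w = H[i]
                s = w.sum()
                if s < 1e-12:
                    continue
                means[i] = (w @ X) / s
                R = X - means[i]
                C = (R * w[:, None]).T @ R / s     # weighted covariance
                evals, evecs = np.linalg.eigh(C)   # ascending eigenvalues
                bases[i] = evecs[:, -subspace_dim:]  # top principal directions

        return means, bases, winners

    # Toy usage: points near a noisy 1-D curve embedded in 3-D
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 3 * np.pi, 500)
    X = np.stack([np.cos(t), np.sin(t), 0.3 * t], axis=1)
    X += 0.05 * rng.standard_normal(X.shape)
    means, bases, winners = operator_map_batch(X, n_units=8, subspace_dim=1)

Because the refit step is a closed-form weighted mean and eigendecomposition rather than a stochastic gradient step, each pass uses all data at once, which is the source of the efficiency and stability the abstract attributes to the batch variants.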
