Artificial neural network for nonlinear projection of multivariate data

The authors propose a learning algorithm to train a multilayer feedforward neural network to perform the well-known Sammon nonlinear projection. The learning algorithm is an extension of the backpropagation algorithm. A significant advantage of the network-based projection over the original Sammon algorithm is that the trained network can project new patterns that were not seen during training. Experimental results indicate that the projection network generalizes well when an appropriately sized training set and network are used. A lower bound on the number of free parameters required to achieve the same representational power as Sammon's algorithm is derived. This lower bound, together with the generalization results, provides guidelines for choosing the size of the network.
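The abstract does not spell out the training procedure, so the following is only a rough sketch of the underlying idea, not the authors' exact learning rule: a small feedforward network is trained to minimize Sammon's stress between pairwise distances in the input space and in the projected 2-D space. PyTorch, the layer sizes, the toy data, and the optimizer are all assumptions made for illustration.

```python
# Minimal sketch (assumed setup, not the paper's algorithm): train a feedforward
# network to approximate Sammon's nonlinear projection by minimizing Sammon's
# stress over pairwise distances.
import torch

torch.manual_seed(0)

# Toy data set: 100 five-dimensional patterns (hypothetical).
X = torch.randn(100, 5)

# Pairwise distances in the original space (distinct pairs only).
D = torch.cdist(X, X)
mask = torch.triu(torch.ones_like(D), diagonal=1).bool()
d_in = D[mask]
c = d_in.sum()                              # normalizing constant in Sammon's stress

# Feedforward projection network: 5 -> 20 -> 2 (sizes are illustrative).
net = torch.nn.Sequential(
    torch.nn.Linear(5, 20), torch.nn.Sigmoid(),
    torch.nn.Linear(20, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(500):
    Y = net(X)                              # projected 2-D patterns
    d_out = torch.cdist(Y, Y)[mask]
    # Sammon's stress: (1/c) * sum (d_in - d_out)^2 / d_in
    stress = ((d_in - d_out) ** 2 / d_in).sum() / c
    opt.zero_grad()
    stress.backward()
    opt.step()

# The trained network can project previously unseen patterns directly,
# which is the generalization property highlighted in the abstract.
X_new = torch.randn(10, 5)
Y_new = net(X_new)
print(f"final stress {stress.item():.4f}, new projections {tuple(Y_new.shape)}")
```

Unlike the original Sammon algorithm, which optimizes the 2-D coordinates of the training patterns directly, this sketch optimizes network weights, so new patterns only require a forward pass.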

[1] George Cybenko, et al., "Approximation by superpositions of a sigmoidal function," Math. Control. Signals Syst., 1992.

[2] Nigel Goddard, et al., "Rochester Connectionist Simulator," 1989.

[3] Teuvo Kohonen, et al., "Self-Organization and Associative Memory," 1988.

[4] Erkki Oja, et al., "Neural Networks, Principal Components, and Subspaces," Int. J. Neural Syst., 1989.

[5] John W. Sammon, et al., "A Nonlinear Mapping for Data Structure Analysis," IEEE Transactions on Computers, 1969.

[6] Gautam Biswas, et al., "Evaluation of Projection Algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1981.

[7] J. Rubner, et al., "A Self-Organizing Network for Principal-Component Analysis," 1989.

[8] Anil K. Jain, et al., "Algorithms for Clustering Data," 1988.

[9] Ying Zhao, et al., "Projection pursuit learning," IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991.

[10] Terence D. Sanger, et al., "Optimal unsupervised learning in a single-layer linear feedforward neural network," Neural Networks, 1989.