Successive learning of linear discriminant analysis: Sanger-type algorithm

Linear discriminant analysis (LDA) is applied in a broad range of areas, e.g. image recognition. However, successive learning algorithms for LDA have not been studied sufficiently, whereas they are well established for principal component analysis (PCA). A successive learning algorithm that does not require N×N matrices, where N is the dimension of the data, has been proposed for LDA (Hiraoka and Hamahira, 1999; Hiraoka et al., 2000). In the present paper, an improvement of this algorithm based on Sanger's (1989) idea is examined. The original algorithm yields only the subspace spanned by the major eigenvectors; the improved algorithm yields the major eigenvectors themselves.
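The paper's LDA algorithm is not reproduced in this abstract, but the underlying idea of Sanger's (1989) rule, on which the improvement is based, can be sketched for the PCA case: a triangular (deflation) term makes each row of the weight matrix converge to an individual eigenvector rather than merely to a basis of the principal subspace. The function name, the diagonal covariance, and all constants below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sanger_update(W, x, eta):
    """One step of Sanger's generalized Hebbian algorithm (GHA).

    W   : (m, N) weight matrix; row i converges to the i-th leading
          eigenvector of the input covariance (up to sign).
    x   : (N,) zero-mean input sample.
    eta : learning rate.

    Update: dW = eta * (y x^T - tril(y y^T) W), where y = W x.
    The lower-triangular term deflates earlier components, so each
    row is trained only on the variance the rows above it leave over.
    """
    y = W @ x
    return W + eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Demo with a known answer: a diagonal covariance diag(5, 4, 3, 2, 1),
# whose eigenvectors are the standard basis vectors e1, e2, ...
rng = np.random.default_rng(0)
d = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # eigenvalues (assumed for the demo)
W = 0.1 * rng.standard_normal((2, 5))     # learn the top-2 eigenvectors
for _ in range(30000):
    x = np.sqrt(d) * rng.standard_normal(5)   # sample with covariance diag(d)
    W = sanger_update(W, x, 1e-3)
# Row 0 aligns with ±e1, row 1 with ±e2, each with roughly unit norm.
```

An Oja-type rule without the triangular term (reference [5]) would instead converge only to some rotation within the top-2 subspace, which mirrors the subspace-versus-eigenvectors distinction the abstract draws for LDA.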

[1] Satoru Hayamizu et al., "Gesture recognition using HLAC features of PARCOR images and HMM based recognizer," Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998.

[2] Hiroshi Mizoguchi et al., "Convergence analysis of online linear discriminant analysis," Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), 2000.

[3] Anil K. Jain et al., "Artificial neural networks for feature extraction and multivariate data projection," IEEE Transactions on Neural Networks, 1995.

[4] Vwani P. Roychowdhury et al., "On self-organizing algorithms and networks for class-separability features," IEEE Transactions on Neural Networks, 1997.

[5] E. Oja, "Simplified neuron model as a principal component analyzer," Journal of Mathematical Biology, 1982.

[6] Terence D. Sanger, "Optimal unsupervised learning in a single-layer linear feedforward neural network," Neural Networks, 1989.