Two-dimensional canonical correlation analysis and its application in small sample size face recognition

In traditional canonical correlation analysis (CCA)-based face recognition methods, the number of training samples is usually smaller than the dimensionality of each sample; this is the so-called small sample size (SSS) problem. To address this problem, a new supervised learning method called two-dimensional CCA (2DCCA) is developed in this paper. Unlike traditional CCA, 2DCCA extracts features directly from image matrices rather than from vectors produced by a matrix-to-vector transformation. In practice, the covariance matrix computed by 2DCCA is always full rank, so the SSS problem can be effectively handled by the proposed method. The theoretical foundation of 2DCCA is first developed, and the construction of the class-membership matrix Y, which precisely represents the relationship between samples and classes in the 2DCCA framework, is then clarified. In addition, an analytic form of the generalized inverse of this class-membership matrix is derived. Experimental results on face recognition show that the proposed method not only effectively solves the SSS problem but also achieves better recognition performance than several other CCA-based methods.
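The abstract's central argument is that computing covariance directly from image matrices sidesteps the rank deficiency behind the SSS problem. The NumPy sketch below illustrates that point under stated assumptions: it builds a one-hot class-membership matrix Y (an assumed construction; the paper's exact Y may differ), checks the closed-form generalized inverse diag(1/n_k) Yᵀ against the Moore-Penrose pseudoinverse, and contrasts the rank-deficient vectorized sample covariance with a 2DPCA-style image covariance. The data, dimensions, and variable names are hypothetical, and the paper's actual 2DCCA projection step is not reproduced.

```python
import numpy as np

# Hypothetical setup: N face images of size m x n with N << m*n,
# i.e. the small-sample-size (SSS) regime discussed in the abstract.
rng = np.random.default_rng(0)
N, m, n, c = 20, 32, 32, 4
labels = np.arange(N) % c                  # 4 classes, 5 samples each
images = rng.standard_normal((N, m, n))    # stand-in for real face images

# Assumed class-membership matrix Y: one row per sample, one-hot over c classes.
Y = np.zeros((N, c))
Y[np.arange(N), labels] = 1.0

# For a one-hot Y, Y^T Y = diag(n_1, ..., n_c) (per-class counts), so its
# Moore-Penrose generalized inverse has the closed form diag(1/n_k) @ Y^T.
counts = Y.sum(axis=0)
Y_pinv_analytic = np.diag(1.0 / counts) @ Y.T
assert np.allclose(Y_pinv_analytic, np.linalg.pinv(Y))

# Traditional (vectorized) route: the (m*n) x (m*n) sample covariance has rank
# at most N - 1, hence it is singular whenever N < m*n.
X = images.reshape(N, m * n)
Xc = X - X.mean(axis=0)
C_vec = Xc.T @ Xc / N
print("vectorized covariance:", C_vec.shape, "rank =", np.linalg.matrix_rank(C_vec))

# Image-based route (2DPCA-style image covariance): an n x n matrix accumulated
# directly from the image matrices, typically full rank even for small N.
mean_image = images.mean(axis=0)
C_img = sum((A - mean_image).T @ (A - mean_image) for A in images) / N
print("image covariance:     ", C_img.shape, "rank =", np.linalg.matrix_rank(C_img))
```

On this toy data the vectorized 1024 x 1024 covariance has rank at most 19, while the 32 x 32 image covariance is full rank, which is the property the abstract relies on to avoid the SSS problem.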
