Random sampling LDA incorporating feature selection for face recognition

Classical Linear Discriminant Analysis (LDA) usually suffers from the small sample size (SSS) problem when dealing with high-dimensional face data. Many methods, such as Fisherface and Null Space LDA (N-LDA), have been proposed to solve this problem, but they tend to overfit the training set and, in many cases, inevitably lose some useful discriminative information. To effectively utilize nearly all of the useful discriminative information, a not-completely-random sampling framework for integrating multiple features has been developed. However, this method has a main disadvantage: because feature extraction is applied directly, the newly constructed variables may contain a great deal of information originating from redundant features in the original space. In this paper, we therefore introduce a new random sampling LDA that incorporates feature selection for face recognition: redundant features are first removed using the given feature selection methods, PCA is then applied, and finally random sampling is used to generate multiple feature subsets. From these subsets, corresponding weak LDA classifiers are naturally generated, and an integrated classifier is built using a fusion rule. Experiments on four face datasets (AR, ORL, Yale, and YaleB) demonstrate the effectiveness of our algorithm.
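
The following is a minimal sketch of the pipeline described above (feature selection, then PCA, then random sampling of feature subsets, then fused weak LDA classifiers). It is not the authors' implementation: the variance-threshold filter, the PCA dimensionality, the number of random subspaces, and the average-probability fusion rule are illustrative assumptions, written against scikit-learn.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def train_random_sampling_lda(X, y, n_subspaces=20, subspace_dim=50, seed=0):
    """X: (n_samples, n_features) face vectors; y: class labels."""
    rng = np.random.default_rng(seed)

    # Step 1: feature selection -- drop (near-)constant, redundant pixels.
    # (Variance threshold stands in for whatever selector is chosen.)
    selector = VarianceThreshold(threshold=1e-3).fit(X)
    X_sel = selector.transform(X)

    # Step 2: PCA on the selected features before any discriminant analysis.
    pca = PCA(n_components=min(X_sel.shape) - 1).fit(X_sel)
    X_pca = pca.transform(X_sel)

    # Step 3: random sampling of eigenfeature subsets, one weak LDA per subset.
    members = []
    for _ in range(n_subspaces):
        k = min(subspace_dim, X_pca.shape[1])
        idx = rng.choice(X_pca.shape[1], size=k, replace=False)
        clf = LinearDiscriminantAnalysis().fit(X_pca[:, idx], y)
        members.append((idx, clf))
    return selector, pca, members


def predict_random_sampling_lda(selector, pca, members, X):
    X_pca = pca.transform(selector.transform(X))
    # Step 4: fusion rule -- here, average the posterior probabilities
    # of the weak LDA classifiers and pick the most probable class.
    probs = np.mean([clf.predict_proba(X_pca[:, idx]) for idx, clf in members], axis=0)
    return members[0][1].classes_[np.argmax(probs, axis=1)]
```

Majority voting over the weak classifiers' hard decisions would be an equally valid fusion rule; the average-probability (sum) rule is used here only to keep the sketch short.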
