An optimal set of code words and correntropy for rotated least squares regression

This paper presents a robust feature extraction method for face recognition based on least squares regression (LSR). Our focus is to enhance the robustness and discriminability of LSR. First, an optimal set of code words is introduced into LSR. Compared to the traditional set of code words, the new set uses fewer code words and makes the distances between the regression targets of different classes as large as possible. Then, correntropy is integrated into the LSR model for better robustness. Furthermore, since commonly used distance metrics in the subspace, such as the Euclidean and cosine distances, are invariant to rotation, a rotation transformation is introduced as an additional degree of freedom to increase flexibility without sacrificing accuracy. The objective function is optimized with half-quadratic (HQ) optimization, which facilitates algorithm development and convergence analysis. Experimental results show that our method outperforms several subspace methods for face recognition, demonstrating the validity of the proposed approach.
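To make the alternating scheme concrete, below is a minimal, illustrative sketch (not the paper's exact formulation) of how a correntropy-weighted LSR with rotated target code words can be solved by half-quadratic optimization. It assumes a model of the form XW ≈ TR, where T stacks the per-sample code words and R is an orthogonal rotation; the function name correntropy_rotated_lsr and the parameters sigma, reg, and n_iters are introduced here purely for illustration.

```python
import numpy as np

def correntropy_rotated_lsr(X, T, sigma=1.0, n_iters=20, reg=1e-3):
    """Illustrative half-quadratic (IRLS-style) solver for a correntropy-
    weighted least squares regression with an orthogonal rotation on the
    regression targets. This is a sketch, not the paper's exact objective.

    X : (n_samples, n_features) data matrix
    T : (n_samples, n_targets)  regression targets (e.g. class code words)

    Each iteration alternates three steps derived from HQ optimization:
      1. solve a weighted ridge regression for the projection W,
      2. update the rotation R by an orthogonal Procrustes step,
      3. recompute per-sample weights from the Gaussian (correntropy) kernel.
    """
    n, d = X.shape
    _, c = T.shape
    R = np.eye(c)                      # orthogonal rotation of the targets
    w = np.ones(n)                     # HQ auxiliary per-sample weights
    W = np.zeros((d, c))

    for _ in range(n_iters):
        TR = T @ R                     # rotated targets
        Xw = X * w[:, None]            # diag(w) @ X
        # 1. weighted ridge regression:
        #    W = (X' diag(w) X + reg I)^{-1} X' diag(w) T R
        W = np.linalg.solve(X.T @ Xw + reg * np.eye(d), Xw.T @ TR)
        # 2. orthogonal Procrustes: maximize tr(R' T' diag(w) X W)
        U, _, Vt = np.linalg.svd(T.T @ (Xw @ W))
        R = U @ Vt
        # 3. correntropy-induced weights from the current residuals
        E = X @ W - T @ R
        w = np.exp(-np.sum(E ** 2, axis=1) / (2 * sigma ** 2))

    return W, R
```

Under the half-quadratic view, the Gaussian kernel of the correntropy criterion yields per-sample weights that shrink the influence of badly fitted (outlier) samples, while the rotation update reduces to an orthogonal Procrustes problem; this mirrors, at a sketch level, the robustness and rotation ingredients described in the abstract.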
