ℓ2,1 Regularized Correntropy for Robust Feature Selection

In this paper, we study robust feature extraction based on ℓ2,1 regularized correntropy from both theoretical and algorithmic perspectives. On the theoretical side, we show that ℓ2,1-norm minimization can be justified from the viewpoint of half-quadratic (HQ) optimization, which facilitates both convergence analysis and algorithmic development. In particular, we propose a general formulation that unifies ℓ1-norm and ℓ2,1-norm minimization within a common framework. On the algorithmic side, we propose an ℓ2,1 regularized correntropy algorithm that extracts informative features while simultaneously removing outliers from the training data. A new alternating minimization algorithm is developed to optimize the non-convex correntropy objective. For face recognition, we apply the proposed method to obtain an appearance-based model, called Sparse-Fisherfaces. Extensive experiments show that our method selects robust and sparse features, and outperforms several state-of-the-art subspace methods on large-scale and open face recognition datasets.
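As a rough illustration of the alternating scheme described above, the sketch below applies half-quadratic reweighting to an ℓ2,1-regularized correntropy regression of the form min_W −Σ_j exp(−‖x_j W − y_j‖² / 2σ²) + λ‖W‖_{2,1}. This is a minimal sketch under an assumed least-squares regression form Y ≈ XW; the function name and the parameters (lam, sigma, n_iters, eps) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def l21_correntropy_regression(X, Y, lam=0.1, sigma=1.0, n_iters=50, eps=1e-8):
    """Half-quadratic alternating minimization sketch for
    min_W  -sum_j exp(-||x_j W - y_j||^2 / (2 sigma^2)) + lam * ||W||_{2,1}.

    X : (n, d) data matrix, Y : (n, c) targets; returns W : (d, c).
    """
    n, d = X.shape
    # Warm start with a lightly ridge-regularized least-squares solution.
    W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ Y)
    for _ in range(n_iters):
        # Half-quadratic step for the correntropy loss: per-sample weights
        # from the Gaussian kernel; samples with large residuals (outliers)
        # receive weights close to zero.
        E = X @ W - Y
        p = np.exp(-np.sum(E ** 2, axis=1) / (2.0 * sigma ** 2))   # (n,)
        # Reweighting step for the l2,1 term: D_ii = 1 / (2 ||w^i||_2),
        # the standard quadratic majorizer of sum_i ||w^i||_2.
        row_norms = np.sqrt(np.sum(W ** 2, axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))                       # (d, d)
        # Minimization step: weighted ridge-like system solved in closed form.
        XtPX = X.T @ (p[:, None] * X)
        XtPY = X.T @ (p[:, None] * Y)
        W = np.linalg.solve(XtPX + lam * D, XtPY)
    return W
```

In each iteration the correntropy-induced weights p_j shrink exponentially with the residual, which is how outlying training samples are suppressed, while the diagonal matrix D pushes entire rows of W toward zero, yielding row-sparse (feature-selecting) solutions.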
