Orthogonal sparsity preserving projections for feature extraction

Sparse representation has been extensively studied in the signal processing community, where it was shown, somewhat surprisingly, that a target signal can be accurately represented as a linear combination of very few measurement signals, often called atoms, from a given dictionary. This discovery was soon adopted in pattern recognition and has more recently given rise to a newly developed unsupervised feature extraction method named sparsity preserving projections (SPP), which seeks a linear embedding space in which the sparse reconstructive relations among the data in the dictionary are preserved. However, SPP is non-orthogonal and still leaves room for improvement. Specifically, by taking into consideration the preservation of a desirable property of the dictionary, this paper presents orthogonal sparsity preserving projections (OSPP). OSPP iteratively calculates a projective vector that preserves the sparse reconstructive relations as SPP does, while enforcing it to be orthogonal to all previously obtained vectors. An empirical study shows that OSPP has a stronger sparsity preserving ability than SPP and is therefore expected to yield better classification performance, since sparsity is potentially related to discrimination. Experiments on the public Yale face database validate the effectiveness of OSPP compared with several representative unsupervised feature extraction methods.
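To make the described procedure concrete, the following Python sketch illustrates one plausible reading of the method: SPP-style sparse reconstruction weights obtained by l1-regularized regression, the modified weight matrix S + S^T - S^T S, and an iterative extraction of projection vectors with an orthogonalizing deflation step. The abstract does not give the exact update rule, so the function names, the `alpha` and `reg` parameters, and the deflation formula (borrowed from standard orthogonal-projection methods) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical OSPP-style sketch; names, parameters, and the deflation step
# are illustrative assumptions, not the authors' reference implementation.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_reconstruction_weights(X, alpha=0.01):
    """SPP step: for each sample, find sparse reconstruction weights
    over the remaining samples via l1-regularized least squares."""
    n = X.shape[1]                      # X is d x n, columns are samples
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])   # x_i ~ X_{-i} s_i with s_i sparse
        S[idx, i] = lasso.coef_
    return S

def ospp(X, n_components, alpha=0.01, reg=1e-6):
    """Orthogonal sparsity preserving projections (sketch): iteratively
    extract projection vectors that preserve the SPP reconstruction
    relations while staying orthogonal to previously obtained vectors."""
    d, n = X.shape
    S = sparse_reconstruction_weights(X, alpha)
    S_beta = S + S.T - S.T @ S          # SPP's modified weight matrix
    M = X @ S_beta @ X.T                # sparsity-preserving scatter
    C = X @ X.T + reg * np.eye(d)       # regularized total scatter
    C_inv = np.linalg.inv(C)

    W = []
    for _ in range(n_components):
        if not W:
            T = C_inv @ M
        else:
            A = np.column_stack(W)      # previously extracted vectors
            B = C_inv @ A @ np.linalg.inv(A.T @ C_inv @ A) @ A.T
            T = (np.eye(d) - B) @ C_inv @ M   # deflate toward orthogonality
        vals, vecs = np.linalg.eig(T)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        W.append(w / np.linalg.norm(w))
    return np.column_stack(W)           # d x n_components projection matrix
```

Given a data matrix X with samples as columns, W = ospp(X, 30) would return a projection matrix, and W.T @ X the low-dimensional features; classification (e.g., nearest neighbor on the projected Yale faces) would then proceed on these features.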
