Connecting the Dots: Image Classification via Sparse Representation from a Constrained Subspace Perspective

We consider the problem of classifier design via sparse representation based on a constrained subspace model. We argue that the data points in the linear span of the training samples should be constrained in order to yield a more accurate approximation of the underlying data manifold. To this end, the constrained set of data points is formulated as a union of affine subspaces, namely the affine hulls spanned by the training samples. We further argue that the intrinsic dimension of these affine subspaces should match that of the data manifold, so that a classifier built on this model can approach the accuracy of the conceptual NM (Nearest Manifold) classifier. From the constrained subspace perspective, we connect the dots between several classical classifiers, including NN (Nearest Neighbor), NFL (Nearest Feature Line), and NS (Nearest Subspace), and the recently emerged state-of-the-art SRC (Sparse Representation Classifier), and interpret the mechanisms of SRC and Yang's variant of it. Experiments on the Extended Yale B database for image classification corroborate our claims and demonstrate that the proposed classifier, called NCSC-CSR, achieves higher classification accuracy and robustness.

Keywords—sparse representation; constrained subspace; manifold approximation
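To make the constrained subspace idea concrete, the following is a minimal sketch of nearest-affine-hull classification: each class is modeled as the affine hull of its training samples (the span of the samples with coefficients constrained to sum to one), and a test point is assigned to the class whose hull is nearest. This is an illustrative simplification of the model described above, not the paper's NCSC-CSR implementation; all function names here are our own.

```python
import numpy as np

def affine_hull_distance(X, y):
    """Distance from test point y to the affine hull of the columns of X.

    X : (d, n) matrix whose columns are training samples of one class.
    y : (d,) test point.
    The affine hull {X @ c : sum(c) = 1} equals mu + span{x_i - mu},
    where mu is the sample mean, so the constrained problem reduces to
    an ordinary least-squares projection onto the centered samples.
    """
    mu = X.mean(axis=1)
    D = X - mu[:, None]                      # directions spanning the hull
    c, *_ = np.linalg.lstsq(D, y - mu, rcond=None)
    return np.linalg.norm(D @ c - (y - mu))  # residual = distance to hull

def nearest_affine_hull(classes, y):
    """classes: dict mapping label -> (d, n_label) sample matrix."""
    return min(classes, key=lambda k: affine_hull_distance(classes[k], y))
```

Note that without further constraints on the coefficients, an affine hull can extend far beyond the region covered by the training samples; the intrinsic-dimension argument in the abstract is precisely about keeping such subspaces small enough to track the data manifold.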
