Employing deep learning and sparse representation for data classification

Selecting a set of features with the best discriminative power is a persistent challenge in classification. In this paper we propose a method, named GLLC (General Locally Linear Combination), that extracts features with a deep autoencoder, reconstructs each sample as a linear combination of other samples in a low-dimensional space, and then selects the class with the minimum reconstruction error as the winner. Combining learned features with the discriminative character of the sparse model yields a robust classifier that simultaneously reduces the number of samples and of features. Although the main applications of GLLC are visual classification and face recognition, it can be used in other domains as well. We conduct extensive experiments demonstrating that the proposed algorithm achieves high accuracy on various datasets and outperforms state-of-the-art methods.
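The classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the feature vectors have already been produced by the autoencoder, and for brevity it replaces the sparse (regularized) coding step with a plain per-class least-squares fit; only the minimum-reconstruction-error decision rule is shown faithfully.

```python
import numpy as np

def classify_min_residual(x, dictionaries):
    """Assign x to the class whose training samples reconstruct it best.

    dictionaries: dict mapping class label -> (d, n_c) matrix whose columns
    are (autoencoder-extracted) feature vectors of that class's samples.
    NOTE: the sparse code of GLLC is approximated here by an unregularized
    per-class least-squares fit, purely for illustration.
    """
    best_label, best_err = None, np.inf
    for label, D in dictionaries.items():
        # coefficients minimizing ||x - D a||_2 over this class's samples
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        err = np.linalg.norm(x - D @ a)  # reconstruction error for this class
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Toy example: each class's samples span a different coordinate subspace.
D0 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0],
               [0.0, 0.0]])           # class 0 spans the first two coordinates
D1 = np.array([[0.0, 0.0],
               [0.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])           # class 1 spans the last two
x = np.array([1.0, 2.0, 0.1, 0.0])   # mostly explained by class 0's samples
print(classify_min_residual(x, {0: D0, 1: D1}))  # → 0
```

In the full method, the coefficients would come from a sparse solver, so each test sample is reconstructed from only a few training samples, which is what gives the decision rule its discriminative character.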
