Accurate Dictionary Learning with Direct Sparsity Control

Dictionary learning is a popular method for obtaining sparse linear representations of high-dimensional data, with many applications in image classification, signal processing, and machine learning. In this paper, we introduce a novel dictionary learning method based on a recent variable selection algorithm called Feature Selection with Annealing (FSA). Because FSA uses an $L_{0}$ constraint instead of an $L_{1}$ penalty, it does not introduce any shrinkage bias in the coefficients and obtains a more accurate sparse representation. Furthermore, the $L_{0}$ constraint makes it possible to specify the desired sparsity level directly, rather than indirectly through the strength of an $L_{1}$ penalty. Finally, experimental validation on real gray-scale images shows that the proposed method achieves higher accuracy and efficiency in dictionary learning than classical methods based on the $L_{1}$ penalty.
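To make the mechanics concrete, the following Python sketch shows $L_{0}$-constrained sparse coding in the FSA style: repeated gradient steps on the reconstruction loss, interleaved with pruning of the smallest coefficients under an annealing schedule, wrapped in a standard alternating dictionary-learning loop. The schedule constant `mu`, the learning rate, and the least-squares dictionary update are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def fsa_sparse_code(D, x, k, n_iter=60, lr=1e-2, mu=10.0):
    """Approximately solve min_w ||x - D w||^2 s.t. ||w||_0 <= k, FSA style:
    gradient steps interleaved with pruning under an annealing schedule."""
    n_atoms = D.shape[1]
    active = np.arange(n_atoms)   # indices of atoms still in the model
    w = np.zeros(n_atoms)
    for e in range(n_iter):
        Da = D[:, active]
        # gradient step on 0.5 * ||x - Da w_active||^2 (fixed lr is an assumption)
        w[active] -= lr * (Da.T @ (Da @ w[active] - x))
        # annealing schedule (one common choice; constants are assumptions):
        # keep m_e atoms, shrinking from n_atoms down to k over the iterations
        m_e = k + int((n_atoms - k) *
                      max(0.0, (n_iter - 2.0 * e) / (2.0 * e * mu + n_iter)))
        if m_e < active.size:
            keep = np.argsort(-np.abs(w[active]))[:m_e]   # largest coefficients
            w[np.setdiff1d(active, active[keep])] = 0.0   # zero out pruned atoms
            active = active[keep]
    return w

def fsa_dictionary_learning(X, n_atoms, k, n_outer=20, seed=0):
    """Alternate FSA sparse coding with a least-squares dictionary update.
    Minimal sketch: practical implementations also replace unused atoms."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        # sparse coding step: code each column of X with at most k atoms
        W = np.column_stack([fsa_sparse_code(D, X[:, i], k)
                             for i in range(X.shape[1])])
        # dictionary update step: least-squares fit, then renormalize atoms
        D = X @ np.linalg.pinv(W)
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)
    return D, W
```

As a usage example, with vectorized 8x8 gray-scale image patches stacked as the columns of `X`, a call such as `fsa_dictionary_learning(X, n_atoms=128, k=5)` would learn 128 atoms with at most five active coefficients per patch, the sparsity level being set directly through `k`.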
