Exact Recovery of Sparsely-Used Dictionaries

We consider the problem of learning sparsely used dictionaries: given observations Y = AX, where A is an arbitrary square dictionary and X is a random, sparse coefficient matrix, recover A and X. We prove that O(n log n) samples are sufficient to uniquely determine the coefficient matrix. Based on this proof, we design a polynomial-time algorithm, called Exact Recovery of Sparsely-Used Dictionaries (ER-SpUD), and prove that, when the coefficient matrix is sufficiently sparse, it recovers the dictionary and coefficient matrix with high probability. Simulation results show that ER-SpUD recovers the true dictionary and coefficients with higher probability than state-of-the-art algorithms.
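To make the recipe concrete, here is a minimal sketch of the single-column flavor of this idea (often called ER-SpUD(SC)): solve one L1-minimization per column of Y to produce candidate rows of X, keep the n sparsest non-duplicate candidates, and recover A by least squares. The function name er_spud_sc, the use of cvxpy, and the duplicate-filtering heuristic are our own illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
import cvxpy as cp

def er_spud_sc(Y, tol=1e-6):
    """Sketch of single-column ER-SpUD on observations Y = A0 @ X0 (A0 square, X0 sparse)."""
    n, p = Y.shape
    candidates = []
    for j in range(p):
        r = Y[:, j]                      # constraint vector: one column of Y
        w = cp.Variable(n)
        # min ||w^T Y||_1  subject to  r^T w = 1; the optimizer tends to align
        # w^T Y with a single (sparse) row of X0 when X0 is sparse enough.
        cp.Problem(cp.Minimize(cp.norm1(Y.T @ w)), [r @ w == 1]).solve()
        candidates.append(np.asarray(Y.T @ w.value).ravel())

    # Greedily keep the n sparsest candidates that are not (nearly) parallel
    # to a row already selected, i.e. the same row recovered twice.
    candidates.sort(key=lambda s: int(np.sum(np.abs(s) > tol)))
    X_rows = []
    for s in candidates:
        if len(X_rows) == n:
            break
        u = s / np.linalg.norm(s)
        if any(abs(u @ (t / np.linalg.norm(t))) > 0.99 for t in X_rows):
            continue
        X_rows.append(s)
    X = np.vstack(X_rows)                # estimated coefficient matrix (up to scaling/permutation)

    # With X fixed, the dictionary follows by least squares: A ≈ Y X^+.
    A = Y @ np.linalg.pinv(X)
    return A, X
```

The recovered pair (A, X) is only defined up to permutation and scaling of the dictionary columns, so any comparison against a ground-truth dictionary should match columns up to sign and scale.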

[1]  Jirí Matousek,et al.  On variants of the Johnson–Lindenstrauss lemma , 2008, Random Struct. Algorithms.

[2]  Jean Ponce,et al.  Convex Sparse Matrix Factorizations , 2008, ArXiv.

[3]  Thomas S. Huang,et al.  Image Super-Resolution Via Sparse Representation , 2010, IEEE Transactions on Image Processing.

[4]  Mark D. Plumbley Dictionary Learning for L1-Exact Sparse Coding , 2007, ICA.

[5]  Michael Elad,et al.  From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images , 2009, SIAM Rev..

[6]  Rémi Gribonval,et al.  An L1 criterion for dictionary learning by subspace identification , 2010, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.

[7]  William Feller,et al.  An Introduction to Probability Theory and Its Applications , 1967 .

[8]  M. Elad,et al.  $rm K$-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation , 2006, IEEE Transactions on Signal Processing.

[9]  Pierre Comon Independent component analysis - a new concept? signal processing , 1994 .

[10]  Michael Elad,et al.  Dictionaries for Sparse Representation Modeling , 2010, Proceedings of the IEEE.

[11]  Pierre Comon,et al.  Independent component analysis, A new concept? , 1994, Signal Process..

[12]  R. Stanley Enumerative Combinatorics: Volume 1 , 2011 .

[13]  Karin Schnass,et al.  Dictionary Identification—Sparse Matrix-Factorization via $\ell_1$ -Minimization , 2009, IEEE Transactions on Information Theory.

[14]  Huan Wang,et al.  Exact Recovery of Sparse-Used Dictionaries , 2013, IJCAI.

[15]  Barak A. Pearlmutter,et al.  Blind source separation by sparse decomposition , 2000, SPIE Defense + Commercial Sensing.

[16]  P. Erdös On a lemma of Littlewood and Offord , 1945 .

[17]  Shie Mannor,et al.  The Sample Complexity of Dictionary Learning , 2010, COLT.

[18]  Kjersti Engan,et al.  Method of optimal directions for frame design , 1999, 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258).

[19]  Fabian J. Theis,et al.  Sparse component analysis and blind source separation of underdetermined mixtures , 2005, IEEE Transactions on Neural Networks.

[20]  M. Zibulevsky BLIND SOURCE SEPARATION WITH RELATIVE NEWTON METHOD , 2003 .

[21]  Guillermo Sapiro,et al.  Online dictionary learning for sparse coding , 2009, ICML '09.

[22]  A. Bruckstein,et al.  On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them , 2006 .

[23]  A. Bruckstein,et al.  K-SVD : An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation , 2005 .

[24]  David J. Field,et al.  Emergence of simple-cell receptive field properties by learning a sparse code for natural images , 1996, Nature.

[25]  Joseph F. Murray,et al.  Dictionary Learning Algorithms for Sparse Representation , 2003, Neural Computation.

[26]  Huan Wang,et al.  On the local correctness of ℓ1-minimization for dictionary learning , 2011, 2014 IEEE International Symposium on Information Theory.