Near Optimal Compressed Sensing of Sparse Rank-One Matrices via Sparse Power Factorization

Compressed sensing of simultaneously sparse and rank-one matrices enables the recovery of sparse signals from a few linear measurements of their bilinear form. An important question is how many measurements are needed for stable reconstruction in the presence of measurement noise. Unlike conventional compressed sensing of sparse vectors, where convex relaxation via the $\ell_1$-norm achieves near-optimal performance, for compressed sensing of sparse rank-one matrices it was recently shown by Oymak et al. that convex programs using the nuclear norm and mixed norms are highly suboptimal even in the noise-free scenario. We propose an alternating minimization algorithm, called sparse power factorization (SPF), for compressed sensing of sparse rank-one matrices. Starting from a particular initialization, SPF achieves stable recovery with a number of measurements within a logarithmic factor of the information-theoretic fundamental limit. For fast-decaying sparse signals, SPF starting from an initialization with low computational cost also achieves stable reconstruction with the same number of measurements. Numerical results show that SPF empirically outperforms the best known combinations of the mixed and nuclear norms.
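To make the alternating structure concrete, the sketch below (Python with NumPy) illustrates the kind of iteration SPF performs; it is not the authors' exact procedure. The measurement model assumed here is y_i = <A_i, u v^T> with an s1-sparse u and an s2-sparse v. Each half-step solves a least-squares problem in one factor and hard-thresholds the result, standing in for the sparse recovery subroutine (e.g., hard thresholding pursuit) used by SPF, and the truncated-SVD initialization is only a placeholder for the initializations analyzed in the paper. All function and variable names are illustrative.

    import numpy as np

    def hard_threshold(x, s):
        # Keep the s largest-magnitude entries of x and zero out the rest.
        out = np.zeros_like(x)
        keep = np.argsort(np.abs(x))[-s:]
        out[keep] = x[keep]
        return out

    def sparse_power_factorization(A, y, s1, s2, n_iter=50):
        # Illustrative alternating sketch: recover X = u v^T with s1-sparse u
        # and s2-sparse v from measurements y_i = <A_i, X>, A of shape (m, n1, n2).
        m, n1, n2 = A.shape
        # Crude initialization: back-project the measurements and take the
        # leading singular vectors (a stand-in for the paper's initializations).
        X0 = np.tensordot(y, A, axes=(0, 0)) / m
        U, _, Vt = np.linalg.svd(X0)
        u = hard_threshold(U[:, 0], s1)
        v = hard_threshold(Vt[0], s2)
        for _ in range(n_iter):
            # With v fixed, the measurements are linear in u: y ~ (A v) u.
            Av = A @ v                                   # shape (m, n1)
            u = hard_threshold(np.linalg.lstsq(Av, y, rcond=None)[0], s1)
            # With u fixed, the measurements are linear in v: y ~ (u^T A) v.
            Au = np.einsum('mij,i->mj', A, u)            # shape (m, n2)
            v = hard_threshold(np.linalg.lstsq(Au, y, rcond=None)[0], s2)
        return np.outer(u, v)

The point of the sketch is that once one factor is fixed, estimating the other reduces to a standard sparse recovery problem in n1 or n2 unknowns, which is consistent with a sample complexity governed by the sparsity levels, up to logarithmic factors, rather than by the ambient dimensions.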

[1] A. Nobel, et al., Finding large average submatrices in high dimensional data, 2009, 0905.1682.

[2] S. Szarek, Metric Entropy of Homogeneous Spaces, 1997, math/9701213.

[3] Philippe Rigollet, et al., Complexity Theoretic Lower Bounds for Sparse Principal Component Detection, 2013, COLT.

[4] Antonia Maria Tulino, et al., Random Matrix Theory and Wireless Communications, 2004, Found. Trends Commun. Inf. Theory.

[5] A. Robert Calderbank, et al., Compressive blind source separation, 2010, 2010 IEEE International Conference on Image Processing.

[6] Inderjit S. Dhillon, et al., Guaranteed Rank Minimization via Singular Value Projection, 2009, NIPS.

[7] Stéphane Mallat, et al., A Wavelet Tour of Signal Processing - The Sparse Way, 3rd Edition, 2008.

[8] A. Bruckstein, et al., K-SVD: An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation, 2005.

[9] A. Rényi, On the dimension and entropy of probability distributions, 1959.

[10] Simon Foucart, et al., Hard Thresholding Pursuit: An Algorithm for Compressive Sensing, 2011, SIAM J. Numer. Anal.

[11] Toby Berger, et al., Rate distortion theory: a mathematical basis for data compression, 1971.

[12] Yoram Bresler, et al., Oblique Pursuits for Compressed Sensing, 2012, IEEE Transactions on Information Theory.

[13] Xiao-Tong Yuan, et al., Truncated power method for sparse eigenvalue problems, 2011, J. Mach. Learn. Res.

[14] Sunav Choudhary, et al., On identifiability in bilinear inverse problems, 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.

[15] B. Nadler, et al., Minimax bounds for sparse PCA with noisy high-dimensional data, 2012, Annals of Statistics.

[16] Peter Harremoës, et al., Maximum Entropy on Compact Groups, 2006, 2006 IEEE International Symposium on Information Theory.

[17] R. DeVore, et al., A Simple Proof of the Restricted Isometry Property for Random Matrices, 2008.

[18] Yu. I. Ingster, et al., Detection of a sparse submatrix of a high-dimensional noisy matrix, 2011, 1109.0898.

[19] M. Elad, et al., K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation, 2006, IEEE Transactions on Signal Processing.

[20] Sergio Verdú, et al., Optimal Phase Transitions in Compressed Sensing, 2011, IEEE Transactions on Information Theory.

[21] Rémi Gribonval, et al., Double Sparsity: Towards Blind Estimation of Multiple Channels, 2010, LVA/ICA.

[22] Prateek Jain, et al., Low-rank matrix completion using alternating minimization, 2012, STOC '13.

[23] Olgica Milenkovic, et al., Subspace Pursuit for Compressive Sensing Signal Reconstruction, 2008, IEEE Transactions on Information Theory.

[24] I. Johnstone, et al., On Consistency and Sparsity for Principal Components Analysis in High Dimensions, 2009, Journal of the American Statistical Association.

[25] Jing Lei, et al., Minimax Rates of Estimation for Sparse PCA in High Dimensions, 2012, AISTATS.

[26] Emmanuel J. Candès, et al., Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements, 2011, IEEE Transactions on Information Theory.

[27] Yoram Bresler, et al., MR Image Reconstruction From Highly Undersampled k-Space Data by Dictionary Learning, 2011, IEEE Transactions on Medical Imaging.

[28] Deanna Needell, et al., CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, 2008, ArXiv.

[29] Emmanuel J. Candès, et al., Decoding by linear programming, 2005, IEEE Transactions on Information Theory.

[30] Justin K. Romberg, et al., Blind Deconvolution Using Convex Programming, 2012, IEEE Transactions on Information Theory.

[31] Yonina C. Eldar, et al., Blind Compressed Sensing, 2010, IEEE Transactions on Information Theory.

[32] Justin P. Haldar, et al., Rank-Constrained Solutions to Linear Matrix Equations Using PowerFactorization, 2009, IEEE Signal Processing Letters.

[33] Yonina C. Eldar, et al., Simultaneously Structured Models With Application to Sparse and Low-Rank Matrices, 2012, IEEE Transactions on Information Theory.

[34] L. Schumaker, et al., Approximation Theory XIII: San Antonio 2010, 2012.

[35] P. Wedin, Perturbation bounds in connection with singular value decomposition, 1972.

[36] Amir Dembo, et al., The rate-distortion dimension of sets and measures, 1994, IEEE Trans. Inf. Theory.

[37] T. Cai, et al., Sparse PCA: Optimal rates and adaptive estimation, 2012, 1211.1309.

[38] Yihong Wu, et al., Computational Barriers in Minimax Submatrix Detection, 2013, ArXiv.

[39] Zongming Ma, Sparse Principal Component Analysis and Iterative Thresholding, 2011, 1112.2432.

[40] K. Abed-Meraim, et al., Blind SIMO channel identification using a sparsity criterion, 2008, 2008 IEEE 9th Workshop on Signal Processing Advances in Wireless Communications.

[41] Yang Wang, et al., Robust sparse phase retrieval made easy, 2014, 1410.5295.

[42] Yoram Bresler, et al., ADMiRA: Atomic Decomposition for Minimum Rank Approximation, 2009, IEEE Transactions on Information Theory.

[43] Pablo A. Parrilo, et al., Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization, 2007, SIAM Rev.

[44] Yi-Kai Liu, et al., Universal low-rank matrix recovery from Pauli measurements, 2011, NIPS.

[45] Peter Harremoës, et al., Information theory for angular data, 2010, 2010 IEEE Information Theory Workshop on Information Theory (ITW 2010, Cairo).

[46] S. Foucart, Sparse Recovery Algorithms: Sufficient Conditions in Terms of Restricted Isometry Constants, 2012.

[47] M. Talagrand, The Generic Chaining: Upper and Lower Bounds of Stochastic Processes, 2005.