Optimal sample complexity for stable matrix recovery

Tremendous effort has gone into the theoretical and algorithmic study of sparse recovery and low-rank matrix recovery. This paper establishes (near-)optimal sample complexities for stable matrix recovery, free of unspecified constants or logarithmic factors. We treat sparsity, low-rankness, and other parsimonious structures within a single framework: constraint sets with small covering numbers or Minkowski dimensions, which include notoriously challenging cases such as simultaneously sparse and low-rank matrices. We consider three types of random measurement matrices (unstructured, rank-1, and symmetric rank-1), following probability distributions that satisfy mild conditions. In all three cases, we prove a fundamental achievability result: recovery of matrices with parsimonious structures from an optimal (or near-optimal) number of measurements is stable with high probability.
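
To make the three measurement models concrete, the sketch below simulates linear measurements of a simultaneously sparse and rank-1 matrix under each model. This is a minimal illustration, assuming i.i.d. Gaussian entries for simplicity; the paper's conditions on the measurement distributions are milder and more general, and the dimensions, sparsity levels, and helper names here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 20, 20, 150  # matrix dimensions and number of measurements (illustrative)

# Ground truth: a simultaneously sparse and rank-1 matrix X = u v^T,
# where u and v each have only a few nonzero entries.
u = np.zeros(n1); u[:3] = rng.standard_normal(3)
v = np.zeros(n2); v[:4] = rng.standard_normal(4)
X = np.outer(u, v)

def measure_unstructured(X, m, rng):
    """Unstructured model: y_i = <A_i, X> with a dense i.i.d. Gaussian A_i."""
    A = rng.standard_normal((m, *X.shape))
    return np.einsum('ijk,jk->i', A, X)

def measure_rank1(X, m, rng):
    """Rank-1 model: y_i = <a_i b_i^T, X> = a_i^T X b_i, with independent a_i, b_i."""
    a = rng.standard_normal((m, X.shape[0]))
    b = rng.standard_normal((m, X.shape[1]))
    return np.einsum('ij,jk,ik->i', a, X, b)

def measure_symmetric_rank1(X, m, rng):
    """Symmetric rank-1 model: y_i = a_i^T X a_i (X must be square)."""
    a = rng.standard_normal((m, X.shape[0]))
    return np.einsum('ij,jk,ik->i', a, X, a)

y = measure_rank1(X, m, rng)
print(y.shape)  # (150,)
```

Noisy observations can be modeled by adding a perturbation such as `y + sigma * rng.standard_normal(m)`; the stability guarantees in the paper concern how the recovery error scales with the size of such perturbations when m is (near-)optimally chosen for the structure of X.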
