On the Effective Measure of Dimension in the Analysis Cosparse Model

Many applications have benefited remarkably from low-dimensional models over the past decade. The fact that many signals, though high dimensional, are intrinsically low dimensional makes it possible to recover them stably from a relatively small number of measurements. For example, in compressed sensing with the standard (synthesis) sparsity prior and in matrix completion, the number of measurements needed is proportional (up to a logarithmic factor) to the signal's manifold dimension. Recently, a new natural low-dimensional signal model has been proposed: the cosparse analysis prior. In the noiseless case, signals from this model can be recovered, using a combinatorial search, from a number of measurements proportional to their manifold dimension. However, if we ask for stability to noise or for an efficient (polynomial-complexity) solver, all existing results demand a number of measurements far removed from the manifold dimension, sometimes far greater. It is therefore natural to ask whether this gap is a deficiency of the theory and the solvers, or whether there is a real barrier to recovering cosparse signals by relying only on their manifold dimension. Is there an algorithm that, in the presence of noise, can accurately recover a cosparse signal from a number of measurements proportional to the manifold dimension? In this paper, we prove that no such algorithm exists. Furthermore, we show through numerical simulations that even in the noiseless case convex relaxations fail when the number of measurements is comparable to the manifold dimension. This gives a practical counterexample to the growing literature on compressed acquisition of signals based on manifold dimension.
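To make the setting concrete, the following is a minimal Python sketch, not the paper's experiments, of the cosparse analysis model and noiseless analysis l1-minimization. The dimensions d and p, the cosparsity ell, the random Gaussian analysis operator Omega, and the cvxpy-based solver are all illustrative assumptions; the sketch only constructs a signal whose manifold dimension is d - ell and reports the relative recovery error for a few measurement counts m.

# Minimal illustrative sketch (not the paper's experiments), assuming:
#   - a random Gaussian analysis operator Omega of size p x d,
#   - cosparsity ell (number of zero entries in Omega x),
#   - noiseless Gaussian measurements and analysis l1-minimization via cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

d, p, ell = 50, 70, 45        # signal dimension, analysis rows, cosparsity
manifold_dim = d - ell        # generic dimension of the cosparse set for a fixed cosupport

# Draw Omega and build an ell-cosparse signal: pick a cosupport Lambda and
# project onto the null space of the corresponding rows of Omega.
Omega = rng.standard_normal((p, d))
Lambda = rng.choice(p, size=ell, replace=False)
_, _, Vt = np.linalg.svd(Omega[Lambda])
null_basis = Vt[ell:].T       # basis of the (d - ell)-dimensional null space of Omega_Lambda
x = null_basis @ rng.standard_normal(manifold_dim)

def analysis_l1_error(m):
    """Relative error of min ||Omega z||_1 s.t. M z = y, with m Gaussian measurements."""
    M = rng.standard_normal((m, d)) / np.sqrt(m)
    y = M @ x
    z = cp.Variable(d)
    cp.Problem(cp.Minimize(cp.norm1(Omega @ z)), [M @ z == y]).solve()
    return np.linalg.norm(z.value - x) / np.linalg.norm(x)

# Report the recovery error as m grows from near the manifold dimension toward d.
for m in (manifold_dim + 2, 2 * manifold_dim, d // 2, d - manifold_dim):
    print(f"m = {m:2d}: relative error = {analysis_l1_error(m):.2e}")

In this toy setup one can vary ell and m to observe how the convex relaxation's recovery error behaves relative to the manifold dimension d - ell, which is the gap the abstract refers to.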
