ℓ2/ℓ2-Foreach Sparse Recovery with Low Risk

In this paper, we consider the "foreach" sparse recovery problem with failure probability p. The goal is to design a distribution over m × N matrices Φ and a decoding algorithm A such that for every x∈ℝN, with probability at least 1−p, $$\|\mathbf{x}-A(\Phi\mathbf{x})\|_2\leqslant C\|\mathbf{x}-\mathbf{x}_k\|_2,$$ where xk is the best k-sparse approximation of x. Our two main results are: (1) We prove a lower bound of Ω(k log(N/k) + log(1/p)) on m, the number of measurements, for $2^{-\Theta(N)}\leqslant p <1$. Cohen, Dahmen, and DeVore [4] prove that this bound is tight. (2) We prove nearly matching upper bounds that also admit sublinear-time decoding; previous such results were obtained only when p = Ω(1). One corollary of our results is an extension of the results of Gilbert et al. [6] for information-theoretically bounded adversaries.
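
As a concrete illustration of the guarantee above, the following is a minimal numerical sketch in the style of Count-Sketch: a random bucket-and-sign matrix Φ applied to x, and a median-of-estimates decoder A that keeps the k largest estimated coordinates. The parameter choices (R repetitions, B buckets) and the top-k decoder are illustrative assumptions chosen for constant failure probability, not the constructions of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, k = 1024, 8        # signal length and sparsity
R, B = 9, 4 * k       # repetitions and buckets; m = R * B measurements
                      # (illustrative choices; R grows like log(1/p))

# One hash h_r: [N] -> [B] and sign s_r: [N] -> {-1,+1} per repetition;
# together they define an implicit m x N measurement matrix Phi.
buckets = rng.integers(0, B, size=(R, N))
signs = rng.choice([-1.0, 1.0], size=(R, N))

def measure(x):
    """Compute y = Phi x: per repetition, a signed sum within each bucket."""
    y = np.zeros((R, B))
    for r in range(R):
        np.add.at(y[r], buckets[r], signs[r] * x)
    return y

def decode(y):
    """Estimate each coordinate by a median over repetitions; keep the top k."""
    rows = np.arange(R)[:, None]
    est = np.median(signs * y[rows, buckets], axis=0)  # est[i] approximates x[i]
    xhat = np.zeros(N)
    top = np.argsort(np.abs(est))[-k:]
    xhat[top] = est[top]
    return xhat

# A signal with k large coordinates plus a dense noise "tail".
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(0.0, 10.0, size=k)
x += rng.normal(0.0, 0.1, size=N)

xk = np.zeros(N)
head = np.argsort(np.abs(x))[-k:]
xk[head] = x[head]                       # best k-sparse approximation x_k

xhat = decode(measure(x))
print(np.linalg.norm(x - xhat))          # recovery error ||x - A(Phi x)||_2
print(np.linalg.norm(x - xk))            # best k-term error ||x - x_k||_2
```

Increasing the number of repetitions R is what drives the failure probability p down, at the cost of R·B total measurements, mirroring the log(1/p) term in the bounds above.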

[1] Ely Porat et al. Approximate Sparse Recovery: Optimizing Time and Measurements. 2012, SIAM J. Comput.

[2] R. DeVore et al. Compressed sensing and best k-term approximation. 2008.

[3] Emmanuel J. Candès et al. Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? 2004, IEEE Transactions on Information Theory.

[4] David P. Woodruff et al. (1 + ε)-Approximate Sparse Recovery. 2011, IEEE 52nd Annual Symposium on Foundations of Computer Science.

[5] Madhu Sudan. Essential Coding Theory Problem Set 2.

[6] Moses Charikar et al. Finding frequent items in data streams. 2004, Theor. Comput. Sci.

[7] Atri Rudra et al. Recovering simple signals. 2012, Information Theory and Applications Workshop.

[8] Aravind Srinivasan et al. Chernoff-Hoeffding bounds for applications with limited independence. 1995, SODA '93.

[9] Atri Rudra et al. Efficiently Decodable Compressed Sensing by List-Recoverable Codes and Recursion. 2012, STACS.

[10] April Rasala Lehman et al. Network coding: does the model need tuning? 2005, SODA '05.

[11] Ely Porat et al. Sublinear time, measurement-optimal, sparse recovery for all. 2012, SODA.

[12] M. Sion. On general minimax theorems. 1958.

[13] Richard J. Lipton et al. A New Approach To Information Theory. 1994, STACS.

[14] K. Ball. An Elementary Introduction to Modern Convex Geometry. 1997.

[15] H. Whitney et al. An inequality related to the isoperimetric inequality. 1949.

[16] Piotr Indyk et al. Sparse Recovery Using Sparse Matrices. 2010, Proceedings of the IEEE.

[17] Naftali Tishby et al. The information bottleneck method. 2000, arXiv.

[18] Prakash Narayan et al. Reliable Communication Under Channel Uncertainty. 1998, IEEE Trans. Inf. Theory.

[19] Venkatesan Guruswami et al. Explicit Codes Achieving List Decoding Capacity: Error-Correction With Optimal Redundancy. 2005, IEEE Transactions on Information Theory.

[20] A. Rudra et al. List decoding and property testing of error-correcting codes. 2007.

[21] Martin Vetterli et al. Compressive Sampling [From the Guest Editors]. 2008, IEEE Signal Processing Magazine.

[22] Eric Price et al. Improved Concentration Bounds for Count-Sketch. 2012, SODA.

[23] Dror Irony et al. Communication lower bounds for distributed-memory matrix multiplication. 2004, J. Parallel Distributed Comput.

[24] Venkatesan Guruswami et al. Improved decoding of Reed-Solomon and algebraic-geometry codes. 1999, IEEE Trans. Inf. Theory.