For-All Sparse Recovery in Near-Optimal Time

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N; an m-by-N measurement matrix Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖₁ ≤ (1 + ε)‖x − x_k‖₁, where x_k denotes the best k-term approximation to x. We consider the “for all” model, in which a single matrix Φ, possibly “constructed” non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm, by Porat and Strauss [2012], uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1-α} N^α) for any constant α > 0.

In this article, we improve the number of measurements to O(ε^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), under the modest restrictions that k ≤ N^{1-α} and ε ≤ (log k / log N)^γ for any constants α, β, γ > 0. When k ≤ log^c N for some c > 0, the runtime is reduced to O(k poly(log N, 1/ε)). With no restriction on ε, we obtain an approximate recovery system with m = O((k/ε) log(N/k) ((log N / log k)^γ + 1/ε)) measurements.

The overall architecture of this algorithm is similar to that of Porat and Strauss [2012]: we repeatedly use a weak recovery system, with varying parameters, to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or of two unbalanced expanders, for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure, reminiscent of network coding, that reflects the structure of the hashing stages: each signal position index i is associated with a unique message m_i, which is then encoded into a longer message m′_i (in contrast to Porat and Strauss [2012], where the encoding is simply the identity). Portions of the message m′_i correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions. The decoding, or recovery, algorithm recovers the portions of the longer messages m′_i and then decodes them back to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to the list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.
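To make the recovery interface concrete, the following is a minimal, self-contained sketch in Python/NumPy of a single-layer, CountSketch-style system with the shape x̂ = R(Φx), checked empirically against the ℓ1/ℓ1 guarantee. It is an illustration only, not the construction of this article: it omits the two-layer hashing, the expander-coded index messages, and the measurement-optimal parameters, and every constant below (reps, buckets, the test signal) is an arbitrary choice.

```python
import numpy as np

# Toy parameters (illustrative only; not the constants from the paper).
N, k, eps = 1024, 8, 0.5
reps = 16        # independent hash repetitions
buckets = 4 * k  # buckets per repetition

rng = np.random.default_rng(0)

# One CountSketch-style hashing layer: Phi is (reps * buckets) x N,
# with each repetition hashing every coordinate into a signed bucket.
h = rng.integers(0, buckets, size=(reps, N))   # bucket assignments
s = rng.choice([-1.0, 1.0], size=(reps, N))    # random signs
Phi = np.zeros((reps * buckets, N))
for r in range(reps):
    Phi[r * buckets + h[r], np.arange(N)] = s[r]

def R(y):
    """Recovery: estimate each coordinate as the median over repetitions
    of its sign-corrected bucket value, then keep the top k entries."""
    y = y.reshape(reps, buckets)
    est = np.median(s * y[np.arange(reps)[:, None], h], axis=0)
    xhat = np.zeros(N)
    top = np.argpartition(np.abs(est), -k)[-k:]
    xhat[top] = est[top]
    return xhat

# A nearly k-sparse signal: k large spikes plus a small dense tail.
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(0.0, 10.0, size=k)
x += rng.normal(0.0, 0.01, size=N)

xhat = R(Phi @ x)
tail = np.sort(np.abs(x))[:-k].sum()   # ||x - x_k||_1, best k-term error
print("l1 error:", np.abs(xhat - x).sum(), " bound:", (1 + eps) * tail)
```

On such a nearly k-sparse input, the printed ℓ1 error typically lands below the (1 + ε)‖x − x_k‖₁ bound, but a single hashing layer gives no “for all” guarantee; the two-layer hashing and the expander-coded messages m′_i are what let one fixed Φ serve every signal x with sublinear decoding time.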

[1] David P. Woodruff et al. On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation. APPROX-RANDOM, 2012.

[2] Graham Cormode et al. Methods for finding frequent items in data streams. VLDB J., 2010.

[3] Ting Sun et al. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag., 2008.

[4] R. DeVore et al. Compressed sensing and best k-term approximation. 2008.

[5] Venkatesan Guruswami et al. Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes. 2008.

[6] Atri Rudra et al. Efficiently decodable non-adaptive group testing. SODA, 2010.

[7] Piotr Indyk et al. Combining geometry and combinatorics: A unified approach to sparse signal recovery. Allerton Conference on Communication, Control, and Computing, 2008.

[8] Graham Cormode et al. Combinatorial Algorithms for Compressed Sensing. Conference on Information Sciences and Systems (CISS), 2006.

[9] Ely Porat et al. Sublinear time, measurement-optimal, sparse recovery for all. SODA, 2012.

[10] Alexander Vardy et al. Correcting errors beyond the Guruswami-Sudan radius in polynomial time. FOCS, 2005.

[11] Eli Upfal. Tolerating a linear number of faults in networks of bounded degree. PODC, 1992.

[12] Ely Porat et al. Approximate sparse recovery: optimizing time and measurements. STOC, 2010.

[13] Piotr Indyk et al. Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform. SODA, 2015.

[14] Moses Charikar et al. Finding frequent items in data streams. Theor. Comput. Sci., 2002.

[15] Andrei Z. Broder et al. On the second eigenvalue of random regular graphs. FOCS, 1987.

[16] Ely Porat et al. From coding theory to efficient pattern matching. SODA, 2009.

[17] D. Donoho et al. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med., 2007.

[18] Amnon Ta-Shma et al. Extractor codes. IEEE Trans. Inf. Theory, 2001.

[19] Emmanuel J. Candès et al. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory, 2004.

[20] Mahdi Cheraghchi et al. Noise-resilient group testing: Limitations and constructions. Discret. Appl. Math., 2008.

[21] S. Frick et al. Compressed Sensing. Computer Vision: A Reference Guide, 2014.

[22] Atri Rudra et al. ℓ2/ℓ2-Foreach Sparse Recovery with Low Risk. ICALP, 2013.

[23] Joel A. Tropp et al. Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. arXiv, 2006.

[24] R. Vershynin et al. One sketch for all: fast algorithms for compressed sensing. STOC, 2007.