Sparse recovery from saturated measurements

A novel theory of sparse recovery is presented in order to bridge the standard compressive sensing framework and the one-bit compressive sensing framework. In the former setting, sparse vectors observed via few linear measurements can be reconstructed exactly. In the latter setting, the linear measurements are only available through their signs, so exact reconstruction of sparse vectors is replaced by estimation of their directions. In the hybrid setting introduced here, a linear measurement is conventionally acquired if is not too large in absolute value, but otherwise it is seen as saturated to plus-or-minus a given threshold. Intuition suggests that sparse vectors of small magnitude should be exactly recoverable, since saturation would not occur, and that sparse vectors of larger magnitude should be accessible though more than just their directions. The purpose of the article is to confirm this intuition and to justify rigorously the following informal statement: measuring at random with Gaussian vectors and reconstructing via an `1-minimization scheme, it is highly likely that all sparse vectors are faithfully estimated from their saturated measurements as long as the number of saturated measurements marginally exceeds the sparsity level. Faithful estimation means exact reconstruction in a small-magnitude regime and control of the relative reconstruction error in a larger-magnitude regime.

[1]  Richard G. Baraniuk,et al.  A simple proof that random matrices are democratic , 2009, ArXiv.

[2]  Richard G. Baraniuk,et al.  Democracy in Action: Quantization, Saturation, and Compressive Sensing , 2011 .

[3]  Laurent Jacques,et al.  Quantized Iterative Hard Thresholding: Bridging 1-bit and High-Resolution Quantized Compressed Sensing , 2013, ArXiv.

[4]  Rayan Saab,et al.  Sobolev Duals for Random Frames and ΣΔ Quantization of Compressed Sensing Measurements , 2013, Found. Comput. Math..

[5]  Richard G. Baraniuk,et al.  1-Bit compressive sensing , 2008, 2008 42nd Annual Conference on Information Sciences and Systems.

[6]  Emmanuel J. Candès,et al.  Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information , 2004, IEEE Transactions on Information Theory.

[7]  Michael Elad,et al.  Audio Inpainting , 2012, IEEE Transactions on Audio, Speech, and Language Processing.

[8]  Holger Rauhut,et al.  A Mathematical Introduction to Compressive Sensing , 2013, Applied and Numerical Harmonic Analysis.

[9]  Yaniv Plan,et al.  One‐Bit Compressed Sensing by Linear Programming , 2011, ArXiv.

[10]  Laurent Jacques,et al.  Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors , 2011, IEEE Transactions on Information Theory.

[11]  Laurent Jacques,et al.  Consistent iterative hard thresholding for signal declipping , 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.

[12]  Michael B. Wakin,et al.  Recovering a Clipped Signal in Sparseland , 2011, ArXiv.

[13]  Yaniv Plan,et al.  Dimension Reduction by Random Hyperplane Tessellations , 2014, Discret. Comput. Geom..

[14]  Richard G. Baraniuk,et al.  One-Bit Compressive Sensing of Dictionary-Sparse Signals , 2016, ArXiv.

[15]  Dmitriy Bilyk,et al.  Random Tessellations, Restricted Isometric Embeddings, and One Bit Sensing , 2015, ArXiv.

[16]  Yaniv Plan,et al.  Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach , 2012, IEEE Transactions on Information Theory.

[17]  S. Frick,et al.  Compressed Sensing , 2014, Computer Vision, A Reference Guide.

[18]  Paris Smaragdis Dynamic Range Extension Using Interleaved Gains , 2009, IEEE Transactions on Audio, Speech, and Language Processing.

[19]  G. Schechtman Two observations regarding embedding subsets of Euclidean spaces in normed spaces , 2006 .

[20]  Yaniv Plan,et al.  One-bit compressed sensing with non-Gaussian measurements , 2012, ArXiv.

[21]  Ming-Jun Lai,et al.  Sparse recovery with pre-Gaussian random matrices , 2010 .