The Generalized Lasso for Sub-Gaussian Measurements With Dithered Quantization

In the problem of structured signal recovery from high-dimensional linear observations, it is commonly assumed that full-precision measurements are available. Under this assumption, the recovery performance of the popular Generalized Lasso (G-Lasso) is by now well-established. In this paper, we extend these results to the practically relevant setting of quantized measurements. We study two extreme quantization schemes, namely uniform and one-bit quantization; the former imposes no limit on the number of quantization bits, while the latter allows only a single bit. In the presence of a uniform dithering signal and when the measurement vectors are sub-gaussian, we show that the same algorithm (i.e., the G-Lasso) enjoys favorable recovery guarantees under both quantization schemes. Our theoretical results shed light on the appropriate choice of the range of the dithering signal and accurately capture the dependence of the error on the problem parameters. For example, our error analysis shows that the G-Lasso with one-bit uniformly dithered measurements incurs only a logarithmic rate loss compared to full-precision measurements.
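
To make the one-bit dithered measurement model concrete, the following minimal sketch simulates such measurements of a sparse signal and recovers it with an ordinary l1-regularized least-squares solve, i.e., the same G-Lasso one would run on full-precision data. All numerical values (the dimensions, the dither range `lam`, and the regularization weight `alpha`) are illustrative choices for this sketch, not prescriptions from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 500, 1000, 10  # measurements, ambient dimension, sparsity (illustrative)

# Unit-norm s-sparse ground-truth signal
x = np.zeros(p)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# Sub-gaussian measurement matrix (standard Gaussian here for simplicity)
A = rng.standard_normal((n, p))

# Uniform dither on [-lam, lam]; lam should cover the typical magnitude
# of <a_i, x> (here <a_i, x> ~ N(0, 1), so lam = 2.0 is a plausible choice)
lam = 2.0
tau = rng.uniform(-lam, lam, size=n)

# One-bit quantized, dithered measurements, rescaled by the dither range
y = lam * np.sign(A @ x + tau)

# G-Lasso: plain l1-penalized least squares run directly on the
# quantized data, unchanged from the full-precision algorithm
est = Lasso(alpha=0.1, fit_intercept=False)  # alpha is a hypothetical tuning value
est.fit(A, y)
x_hat = est.coef_

print(f"relative l2 error: {np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3f}")
```

The property the sketch relies on is standard for uniform dithering: whenever |<a_i, x>| <= lam, the rescaled one-bit measurement lam * sign(<a_i, x> + tau_i) is an unbiased estimate of <a_i, x>, which is why the unmodified G-Lasso remains a sensible estimator on quantized data.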
