Small width, low distortions: quasi-isometric embeddings with quantized sub-Gaussian random projections

Under which conditions can a subset $\mathcal K$ of $\mathbb R^N$ be embedded in a subset of $\delta \mathbb Z^M$, for some resolution $\delta>0$? We address this general question through the specific use of a quantized random linear mapping ${\bf A}:\mathbb R^N \to \delta \mathbb Z^M$ that combines a linear projection of $\mathbb R^N$ into $\mathbb R^M$, associated with a random matrix $\boldsymbol \Phi \in \mathbb R^{M\times N}$, with a uniform scalar (dithered) quantization $\mathcal Q$ of $\mathbb R^M$ onto $\delta\mathbb Z^M$. The targeted embedding relates the $\ell_2$-distance of any pair of vectors in $\mathcal K$ to the $\ell_1$-distance of their respective mappings in $\delta \mathbb Z^M$, allowing for both multiplicative and additive distortions between these two quantities, i.e., describing an $\ell_2/\ell_1$-quasi-isometric embedding. We show that the sought conditions depend on the Gaussian mean width $w(\mathcal K)$ of the subset $\mathcal K$. In particular, given a symmetric sub-Gaussian distribution $\varphi$ and a precision $\epsilon > 0$, if $M \geq C \epsilon^{-5} w(\mathcal K)^2$ and if the sensing matrix $\boldsymbol \Phi$ has entries i.i.d. as $\varphi$, then, with high probability, the mapping $\bf A$ provides an $\ell_2/\ell_1$-quasi-isometry between $\mathcal K$ and its image in $\delta \mathbb Z^M$. Moreover, in this embedding the additive distortion is of order $\delta\epsilon$, while the multiplicative one grows with $\epsilon$. For non-Gaussian random $\boldsymbol \Phi$, the multiplicative error is also impacted by the sparsity of the vector differences, being smaller for differences that are "not too sparse". When $\mathcal K$ is the set of bounded $K$-sparse vectors in any orthonormal basis, only $M \geq C \epsilon^{-2} \log(c N/K\epsilon^{3/2})$ measurements suffice. Remark: all values $C,c>0$ above depend only on $\delta$ and on the distribution $\varphi$.
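
The following minimal Python sketch illustrates the construction described above: a quantized random mapping $\mathbf A(x) = \mathcal Q(\boldsymbol\Phi x + \boldsymbol\xi)$ built from a sub-Gaussian sensing matrix, a uniform dither, and a uniform scalar quantizer of resolution $\delta$, with the normalized $\ell_1$-distance of two mapped vectors compared to the $\ell_2$-distance of the originals. The dimensions (N=256, M=2048), the resolution delta=0.5, and the Rademacher choice for the sub-Gaussian distribution are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of the quantized random mapping A(x) = Q(Phi x + xi), where Q is a
# uniform scalar quantizer of resolution delta mapping R^M onto delta*Z^M and
# xi is a dither drawn uniformly in [0, delta)^M. All dimensions and the
# Rademacher (sub-Gaussian) distribution below are illustrative assumptions.

rng = np.random.default_rng(0)
N, M, delta = 256, 2048, 0.5

Phi = rng.choice([-1.0, 1.0], size=(M, N))   # i.i.d. sub-Gaussian (Rademacher) entries
xi = rng.uniform(0.0, delta, size=M)         # uniform dither

def A(x):
    """Quantized random projection of R^N into delta * Z^M."""
    return delta * np.floor((Phi @ x + xi) / delta)

# Two unit-norm test vectors standing in for elements of the set K.
x = rng.standard_normal(N); x /= np.linalg.norm(x)
y = rng.standard_normal(N); y /= np.linalg.norm(y)

# The quasi-isometry relates the normalized l1-distance of the mappings to the
# l2-distance of the originals, up to a multiplicative factor and a small
# additive term (of order delta*epsilon in the abstract above).
d_l2 = np.linalg.norm(x - y)
d_l1 = np.linalg.norm(A(x) - A(y), 1) / M

print(f"l2 distance of originals: {d_l2:.3f}")
print(f"normalized l1 distance of mappings: {d_l1:.3f}")
```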
