Compressive Sampling and Lossy Compression

Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This raises the question of whether random measurements provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical analysis and experiments, we show that encoding a sparse signal by simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory offers alternative quantization strategies, but they come at the cost of much greater estimation complexity.
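The encoding strategy described above can be sketched in a few lines. Below is a minimal, hedged illustration, not the paper's own experiment: the dimensions (n = 256, m = 96, k = 8), the quantizer step delta, and the lasso weight lam are arbitrary assumptions, and the decoder uses iterative soft thresholding (ISTA), one standard sparse-recovery method, applied to the quantized measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not values from the paper:
n, m, k = 256, 96, 8      # signal length, number of measurements, sparsity
delta = 0.25              # uniform scalar quantizer step size (assumed)
lam = 0.05                # lasso regularization weight (assumed)

# k-sparse signal: random support, Gaussian amplitudes
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# random Gaussian measurement matrix
A = rng.standard_normal((m, n)) / np.sqrt(m)

# encoder: random measurements followed by uniform scalar quantization
y = A @ x
y_q = delta * np.round(y / delta)

# decoder: iterative soft thresholding (ISTA) for the lasso problem
#   minimize 0.5 * ||y_q - A z||^2 + lam * ||z||_1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / (largest singular value)^2
z = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ z - y_q)           # gradient of the quadratic term
    z -= step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("measurement quantization MSE:", np.mean((y - y_q) ** 2))
print("signal reconstruction MSE:   ", np.mean((x - z) ** 2))
```

Comparing the distortion achieved for a given bit budget spent on the quantized measurements with that of directly coding the k nonzero coefficients and their positions gives a concrete feel for the penalty the abstract describes.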
