Rate-Distortion Theory of Distributed Compressed Sensing

This chapter considers correlated, distributed sources encoded without cooperation at the encoders. For these sources, we derive the best achievable performance, in the rate-distortion sense, of any distributed compressed sensing scheme under the constraint of high-rate quantization. Under this model, we also derive a closed-form expression for the rate gain obtained by exploiting the correlation of the sources at the receiver, as well as a closed-form expression for the average performance of the oracle receiver under independent and joint reconstruction. Finally, we show experimentally that exploiting the correlation between the sources performs close to optimal, and that the only penalty is due to the missing knowledge of the sparsity support, as in (non-distributed) compressed sensing. Although the derivation is carried out in the large-system regime, where signal and system parameters tend to infinity, numerical results show that the equations match simulations for parameter values of practical interest.

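To make the setting concrete, the sketch below simulates a simple distributed compressed sensing scenario: two sources share a common sparse component, each is measured independently with a Gaussian matrix and a uniform scalar quantizer, and an oracle receiver (which knows the sparsity supports) reconstructs either each source on its own or both sources jointly. This is a minimal illustration under assumed parameters; the joint-sparsity model, the values of n, m, the sparsities, the quantizer rate R, and all function names are illustrative choices, not the chapter's exact construction.

```python
# Hypothetical sketch (assumed parameters and model, not the chapter's exact
# derivation): oracle reconstruction of two correlated sparse sources from
# quantized compressed measurements, comparing independent vs. joint decoding.
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 128            # signal length and measurements per source (assumed)
k_common, k_innov = 16, 4  # common and innovation sparsity (assumed)
R = 6                      # quantizer rate in bits per measurement (assumed)

# Correlation model: both sources share a sparse common component and add a
# small private innovation on a disjoint support (a joint-sparsity assumption).
support_c = rng.choice(n, k_common, replace=False)
x_common = np.zeros(n)
x_common[support_c] = rng.standard_normal(k_common)

def make_source():
    free = np.setdiff1d(np.arange(n), support_c)
    supp_i = rng.choice(free, k_innov, replace=False)
    x = x_common.copy()
    x[supp_i] = 0.1 * rng.standard_normal(k_innov)
    return x, supp_i

x1, supp_i1 = make_source()
x2, supp_i2 = make_source()

def sense_and_quantize(x):
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
    y = A @ x
    step = (y.max() - y.min()) / 2**R             # uniform scalar quantizer
    return A, np.round(y / step) * step

A1, y1 = sense_and_quantize(x1)
A2, y2 = sense_and_quantize(x2)

def oracle_ls(A, y, supp):
    # Oracle receiver: least squares restricted to the known sparsity support.
    xh = np.zeros(n)
    xh[supp] = np.linalg.lstsq(A[:, supp], y, rcond=None)[0]
    return xh

# Independent reconstruction: each decoder uses only its own measurements.
x1_indep = oracle_ls(A1, y1, np.union1d(support_c, supp_i1))

# Joint reconstruction: stack both measurement vectors and solve for the
# common component and both innovations at once, exploiting the correlation.
B = np.block([
    [A1[:, support_c], A1[:, supp_i1], np.zeros((m, k_innov))],
    [A2[:, support_c], np.zeros((m, k_innov)), A2[:, supp_i2]],
])
theta = np.linalg.lstsq(B, np.concatenate([y1, y2]), rcond=None)[0]
x1_joint = np.zeros(n)
x1_joint[support_c] = theta[:k_common]
x1_joint[supp_i1] = theta[k_common:k_common + k_innov]

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("independent oracle MSE:", mse(x1, x1_indep))
print("joint oracle MSE:      ", mse(x1, x1_joint))
```

In this toy setup the joint decoder conditions the shared component on both measurement vectors, which is the mechanism behind the rate gain the chapter quantifies; the printed MSE values only illustrate the comparison under these assumed parameters.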