On the rate loss and construction of source codes for broadcast channels

In this paper, we first define and bound the rate loss of source codes for broadcast channels. Our broadcast channel model comprises one transmitter and two receivers; the transmitter is connected to each receiver by a private channel and to both receivers by a common channel. The transmitter sends a description of the source pair (X, Y) through these channels; receiver 1 reconstructs X with distortion D1, and receiver 2 reconstructs Y with distortion D2. Let the rates of the common channel and of private channels 1 and 2 be R0, R1, and R2, respectively. The work of Gray and Wyner gives a complete characterization of the achievable rate triples (R0, R1, R2) for any distortion pair (D1, D2). We define the rate loss as the gap between the achievable region and the outer bound formed by the rate-distortion functions, i.e., R0 + R1 + R2 ≥ RX,Y(D1, D2), R0 + R1 ≥ RX(D1), and R0 + R2 ≥ RY(D2). We upper bound the rate loss for general sources by functions of the distortions, and for Gaussian sources by constants, which implies that although the outer bound is generally not achievable, it can be quite close to the achievable region. This also bounds the gap between the achievable region and the inner bound proposed by Gray and Wyner, as well as the performance penalty incurred by using separate rather than joint decoders. We then construct such source codes using entropy-constrained dithered quantizers. The resulting implementation has low complexity and performance close to the theoretical optimum; in particular, for Gaussian sources, the gap between its performance and the theoretical optimum can be bounded from above by constants.
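The three outer-bound inequalities above can be checked numerically once the rate-distortion functions are known. The following sketch is illustrative only and is not taken from the paper: it uses the closed-form Gaussian rate-distortion function R(D) = max(0, ½ log2(σ²/D)) and, for simplicity, assumes X and Y are independent so that RX,Y(D1, D2) = RX(D1) + RY(D2); the function names and parameter values are hypothetical.

```python
import math

def gaussian_rd(var, dist):
    """Rate-distortion function of a memoryless Gaussian source,
    R(D) = max(0, 0.5 * log2(var / D)), in bits per sample."""
    return max(0.0, 0.5 * math.log2(var / dist))

# Illustrative parameters (not from the paper): unit-variance marginals,
# independent X and Y, so the joint rate is the sum of the marginal rates.
var_x, var_y = 1.0, 1.0

def satisfies_outer_bound(r0, r1, r2, d1, d2):
    """Check the three outer-bound inequalities for a rate triple
    (r0, r1, r2) at distortion pair (d1, d2)."""
    rx = gaussian_rd(var_x, d1)
    ry = gaussian_rd(var_y, d2)
    rxy = rx + ry  # independence assumption
    return (r0 + r1 + r2 >= rxy
            and r0 + r1 >= rx
            and r0 + r2 >= ry)

# At D1 = D2 = 0.25 each marginal needs 1 bit/sample, so (0, 1, 1)
# meets the outer bound while (0, 0.5, 0.5) falls short of it.
print(satisfies_outer_bound(0.0, 1.0, 1.0, 0.25, 0.25))
print(satisfies_outer_bound(0.0, 0.5, 0.5, 0.25, 0.25))
```

A triple satisfying these inequalities is only a candidate: the paper's point is precisely that the outer bound is generally not achievable, but that the gap to the achievable region is bounded.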

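The building block of the construction discussed in the abstract is the entropy-constrained dithered quantizer. A minimal sketch of the core subtractive-dither step (uniform scalar lattice, dither shared by encoder and decoder) is shown below; this is a generic illustration of dithered quantization, not the paper's full entropy-coded design, and all names and parameters are illustrative.

```python
import random

def dithered_quantize(x, step, u):
    """Subtractive dither: quantize x + u on a uniform lattice of cell
    width `step`, then subtract the shared dither u at the decoder.
    The reconstruction error is uniform on an interval of width `step`
    and statistically independent of x."""
    q = step * round((x + u) / step)  # nearest lattice point to x + u
    return q - u

random.seed(0)
step = 0.5
for _ in range(1000):
    x = random.gauss(0.0, 1.0)
    u = random.uniform(-step / 2, step / 2)  # dither known to both ends
    xhat = dithered_quantize(x, step, u)
    # The error never exceeds half a cell, regardless of x.
    assert abs(xhat - x) <= step / 2
```

In an entropy-constrained design, the quantizer index would additionally be entropy coded conditioned on the dither, which is what brings the rate close to the theoretical optimum.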
[1] Jacob Ziv et al., "On universal quantization," IEEE Trans. Inf. Theory, 1985.

[2] Michelle Effros et al., "Improved bounds for the rate loss of multiresolution source codes," IEEE Trans. Inf. Theory, 2003.

[3] Michelle Effros et al., "Lossless and lossy broadcast system source codes: theoretical limits, optimal design, and empirical performance," Proc. DCC 2000, Data Compression Conference, 2000.

[4] Michael Fleming et al., "Network vector quantization," IEEE Trans. Inf. Theory, 2001.

[5] Qian Zhao et al., "Broadcast system source codes: a new paradigm for data compression," Conf. Record of the Thirty-Third Asilomar Conference on Signals, Systems, and Computers, 1999.

[6] Michelle Effros et al., "Functional source coding for networks with receiver side information," 2004.

[7] Meir Feder et al., "On universal quantization by randomized uniform/lattice quantizers," IEEE Trans. Inf. Theory, 1992.

[8] Hanying Feng et al., "Network source coding using entropy constrained dithered quantization," Proc. DCC 2003, Data Compression Conference, 2003.

[9] Meir Feder et al., "Information rates of pre/post-filtered dithered quantizers," IEEE Trans. Inf. Theory, 1993.

[10] Toby Berger et al., "Multiterminal source coding with high resolution," IEEE Trans. Inf. Theory, 1999.

[11] Ram Zamir et al., "The rate loss in the Wyner-Ziv problem," IEEE Trans. Inf. Theory, 1996.

[12] Toby Berger et al., "All sources are nearly successively refinable," IEEE Trans. Inf. Theory, 2001.

[13] Robert M. Gray et al., "Source coding for a simple network," 1974.

[14] Michelle Effros et al., "Multiresolution source coding using entropy constrained dithered scalar quantization," Proc. DCC 2004, Data Compression Conference, 2004.