Joint source-channel coding of a Gaussian mixture source over the Gaussian broadcast channel

Suppose that we want to send a description of a single source to two listeners through a Gaussian broadcast channel, where the channel is used once per source sample. The problem of joint source-channel coding is to design a communication system that minimizes the distortion D₁ at receiver 1 and, at the same time, the distortion D₂ at receiver 2. If the source is Gaussian, the optimal solution is well known, and it is achieved by an uncoded "analog" scheme. We consider a Gaussian mixture source. We derive inner and outer bounds for the distortion region of all (D₁, D₂) pairs that are simultaneously achievable. The outer bound is based on the entropy power inequality, while the inner bound is attained by a digital-over-analog encoding scheme, which we present. We also show that if the modes of the Gaussian mixture are highly separated, our bounds are tight, and hence our scheme attains the entire distortion region. This optimal region exceeds the region attained by separating source and channel coding, although it does not contain the "ideal" point (D₁, D₂) = (R⁻¹(C₁), R⁻¹(C₂)).
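As a concrete illustration of the "ideal" point (D₁, D₂) = (R⁻¹(C₁), R⁻¹(C₂)), the following sketch evaluates it for a unit-variance Gaussian source, using the standard AWGN capacity and Gaussian distortion-rate formulas. The power and noise values are illustrative assumptions, not parameters from the paper:

```python
import math

def single_user_capacity(P, N):
    # AWGN capacity in bits per channel use: C = (1/2) log2(1 + P/N)
    return 0.5 * math.log2(1 + P / N)

def gaussian_distortion_rate(R, sigma2=1.0):
    # Distortion-rate function of an i.i.d. Gaussian source:
    # D(R) = sigma^2 * 2^(-2R), so R^{-1}(C) means evaluating D at rate C.
    return sigma2 * 2 ** (-2 * R)

# Illustrative (assumed) values: transmit power P, noise variances N1 < N2.
P, N1, N2 = 10.0, 1.0, 4.0
C1 = single_user_capacity(P, N1)   # capacity if receiver 1 had the channel alone
C2 = single_user_capacity(P, N2)   # capacity if receiver 2 had the channel alone
D1 = gaussian_distortion_rate(C1)  # = N1 / (N1 + P)
D2 = gaussian_distortion_rate(C2)  # = N2 / (N2 + P)
```

Note that each coordinate simplifies to Nᵢ/(Nᵢ + P), which is exactly the distortion the uncoded analog scheme achieves for a Gaussian source on a matched-bandwidth AWGN link; the point is "ideal" for broadcast because it pretends each receiver has the channel to itself.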
