On the joint source-channel coding error exponent for discrete memoryless systems

We investigate the computation of Csiszár's bounds on the joint source-channel coding (JSCC) error exponent E_J of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates at which the bounds are attained. These equivalent representations can be readily computed for arbitrary source-channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent E_J and the tandem coding error exponent E_T, which applies when the source and channel are coded separately. It is shown that E_T ≤ E_J ≤ 2E_T. We establish conditions under which E_J > E_T and conditions under which E_J = 2E_T. Numerical examples indicate that E_J is close to 2E_T for many source-channel pairs. This gain translates into a power saving of more than 2 dB for a binary source transmitted over additive white Gaussian noise (AWGN) channels and over Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure.
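To illustrate the kind of quantity the paper computes, the following is a minimal numerical sketch of Csiszár's random-coding lower bound on E_J for a Bernoulli(p) source sent over a binary symmetric channel at one channel use per source symbol: E_J ≥ min_R [e(R) + E_r(R)], where e(R) is the source error exponent and E_r(R) is Gallager's random-coding channel exponent. This sketch is not the paper's method: it uses brute-force grid search in place of Arimoto's algorithm, fixes the uniform channel input (optimal for the BSC), and the source parameters, grids, and resolutions are illustrative assumptions.

```python
import numpy as np

def D(q, p):
    """Binary KL divergence D(q || p) in nats."""
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def h2(q):
    """Binary entropy in nats."""
    return -q * np.log(q) - (1 - q) * np.log(1 - q)

def source_exponent(R, p, qs):
    """Source error exponent e(R) = min{ D(Q||P) : H(Q) >= R } for a
    Bernoulli(p) source, minimized over a grid qs of candidate biases."""
    feas = qs[h2(qs) >= R]
    return np.inf if feas.size == 0 else D(feas, p).min()

def gallager_E0(rho, Q, W):
    """Gallager's function E_0(rho, Q) for channel matrix W (rows = inputs)."""
    inner = (Q[:, None] * W ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log((inner ** (1.0 + rho)).sum())

def Er(R, Q, W, rhos):
    """Random-coding exponent E_r(R) = max_rho [E_0(rho,Q) - rho*R] for a
    fixed input distribution Q (uniform is optimal for the BSC)."""
    return max(gallager_E0(r, Q, W) - r * R for r in rhos)

# Illustrative parameters: Bernoulli(0.1) source, BSC with crossover 0.05.
p, eps = 0.1, 0.05
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
Q = np.array([0.5, 0.5])
C = np.log(2) - h2(eps)            # BSC capacity in nats

qs = np.linspace(1e-6, 0.5, 2000)   # grid of source types
rhos = np.linspace(0.0, 1.0, 501)   # grid for Gallager's rho
Rs = np.linspace(h2(p), C, 400)     # rates between H(P) and C

# Lower bound on E_J: min over R of source exponent plus channel exponent.
EJ_lower = min(source_exponent(R, p, qs) + Er(R, Q, W, rhos) for R in Rs)
```

Since H(P) < C for these parameters, the two exponents never vanish at the same rate and the resulting bound is strictly positive, consistent with reliable transmission being possible.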
