On the construction of some capacity-approaching coding schemes

This thesis proposes two constructive methods for approaching the Shannon limit very closely. Interestingly, the two methods operate in opposite regimes: one uses a block length of one, while the other uses a block length approaching infinity.

The first approach is based on novel memoryless joint source-channel coding schemes. We first show examples of sources and channels for which uncoded transmission is optimal at every value of the signal-to-noise ratio (SNR). When the source bandwidth exceeds the channel bandwidth, we propose joint coding schemes based on space-filling curves and other families of curves. For uniform sources and modulo channels, our scheme based on space-filling curves operates within 1.1 dB of Shannon's rate-distortion bound; for Gaussian sources and additive white Gaussian noise (AWGN) channels, we achieve performance within 0.9 dB of the rate-distortion bound.

The second approach is based on low-density parity-check (LDPC) codes. We first demonstrate that the threshold of an LDPC code can be translated accurately between channels using a simple mapping. From this observation we develop several approximate models of density evolution, namely the erasure-channel, Gaussian-capacity, and reciprocal-channel approximations. The reciprocal-channel approximation, based on dualizing LDPC codes, models density evolution on the AWGN channel very accurately. We also develop a Gaussian approximation that lets us visualize the otherwise infinite-dimensional density evolution and use it to optimize LDPC codes, along with further tools for understanding density evolution. Using these tools, we design LDPC codes that approach the Shannon limit extremely closely. For multilevel AWGN channels, we design a rate-1/2 code whose threshold is within 0.0063 dB of the Shannon limit of the noisiest level. For binary-input AWGN channels, our best rate-1/2 LDPC code has a threshold within 0.0045 dB of the Shannon limit. Simulation results show that we can come within 0.04 dB of the Shannon limit at a bit error rate of 10⁻⁶ using a block length of 10⁷.
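To make the first idea concrete, below is a minimal Python sketch (not the thesis's actual scheme) of 2:1 bandwidth compression with a space-filling curve: two uniform source samples in [0,1)² are mapped to a single channel input in [0,1) by locating the cell they fall into on an order-k discrete Hilbert curve, the normalized curve index is sent over a modulo-1 additive-noise channel, and the decoder inverts the index. The curve order k and the noise level sigma are illustrative assumptions; the point is that the locality of the Hilbert curve keeps small channel perturbations from causing large reconstruction errors.

# A minimal sketch of space-filling-curve bandwidth compression (assumed
# parameters: curve order k, modulo-1 Gaussian channel noise of std sigma).
import random

def rot(n, x, y, rx, ry):
    """Rotate/reflect a quadrant so each sub-curve has the right orientation."""
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Map grid cell (x, y) on an n-by-n grid (n a power of 2) to its Hilbert-curve index."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d

def d2xy(n, d):
    """Inverse of xy2d: Hilbert-curve index d back to grid cell (x, y)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = rot(s, x, y, rx, ry)
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

k = 6                    # curve order: 2^k by 2^k grid (illustrative choice)
n = 1 << k
sigma = 0.005            # std of the modulo-1 channel noise (illustrative choice)

def encode(u1, u2):
    """Two uniform source samples in [0,1) -> one channel input in [0,1)."""
    x, y = min(int(u1 * n), n - 1), min(int(u2 * n), n - 1)
    return (xy2d(n, x, y) + 0.5) / (n * n)

def decode(t):
    """Channel output in [0,1) -> reconstructed source pair."""
    d = min(int(t * n * n), n * n - 1)
    x, y = d2xy(n, d)
    return (x + 0.5) / n, (y + 0.5) / n

# End-to-end check: per-sample mean squared error over random source samples.
mse, trials = 0.0, 10000
for _ in range(trials):
    u1, u2 = random.random(), random.random()
    t = (encode(u1, u2) + random.gauss(0.0, sigma)) % 1.0   # modulo-1 channel
    v1, v2 = decode(t)
    mse += ((u1 - v1) ** 2 + (u2 - v2) ** 2) / 2
print("per-sample MSE:", mse / trials)

Small noise moves the decoded point only slightly along the curve; occasional large noise realizations jump to a distant curve segment, producing the threshold effect typical of such analog mappings.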

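For the second approach, the Gaussian approximation reduces density evolution for a regular (d_v, d_c) LDPC code on the binary-input AWGN channel to a one-dimensional recursion on message means: with phi(m) = 1 - E[tanh(u/2)] for u ~ N(m, 2m) and channel LLR mean m_u0 = 2/sigma², the updates are m_v = m_u0 + (d_v - 1) m_c and m_c = phi^{-1}(1 - [1 - phi(m_v)]^{d_c - 1}). The sketch below estimates the threshold of the regular (3,6) code by bisecting on sigma; the quadrature, iteration caps, and search bounds are implementation assumptions, and the output is an approximation to, not the exact, density-evolution threshold.

# A minimal sketch of Gaussian-approximation density evolution for a
# regular (d_v, d_c) LDPC code on the binary-input AWGN channel.
import numpy as np

def phi(m):
    """phi(m) = 1 - E[tanh(u/2)], u ~ N(m, 2m); phi(0) = 1, monotonically decreasing."""
    if m <= 0:
        return 1.0
    s = np.sqrt(2.0 * m)
    u = np.linspace(m - 12 * s, m + 12 * s, 2001)       # simple quadrature grid
    pdf = np.exp(-(u - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    return 1.0 - np.trapz(np.tanh(u / 2.0) * pdf, u)

def phi_inv(y, lo=1e-8, hi=1e3):
    """Numerical inverse of phi by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def converges(sigma, dv=3, dc=6, max_iter=300):
    """One-dimensional recursion on message means; True if the check-to-variable
    mean grows without bound, i.e. decoding is predicted to succeed."""
    m_u0 = 2.0 / sigma ** 2          # mean of the channel LLR (BPSK, all-zero codeword)
    m_c = 0.0
    for _ in range(max_iter):
        m_v = m_u0 + (dv - 1) * m_c                          # variable-node update
        m_c = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))    # check-node update
        if m_c > 100.0:              # messages essentially error-free
            return True
    return False

# Bisect on sigma to locate the approximate threshold of the (3,6) code.
lo, hi = 0.5, 1.5
for _ in range(15):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if converges(mid) else (lo, mid)
print("GA threshold estimate: sigma ~ %.4f" % (0.5 * (lo + hi)))

Under these assumptions the estimate should land near the exact density-evolution threshold of roughly sigma = 0.88 for the (3,6) code; code optimization in the thesis proceeds by applying such analyses to irregular degree distributions.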