Complexity-Optimized Low-Density Parity-Check Codes

Using a numerical approach, tradeoffs between code rate and decoding complexity are studied for long-block-length irregular low-density parity-check (LDPC) codes decoded using the sum-product algorithm under the usual parallel-update message-passing schedule. The channel is an additive white Gaussian noise channel and the modulation format is binary antipodal signalling, although the methodology can be extended to any other channel for which a density-evolution analysis can be carried out. A measure is introduced that incorporates two factors contributing to decoding complexity. The first factor, which scales linearly with the number of edges in the code’s factor graph, measures the number of operations required to carry out a single decoding iteration. The second is an estimate of the number of iterations required to reduce the bit-error probability from that given by the channel to a desired target. The decoding-complexity measure is obtained from a density-evolution analysis of the code, which relates decoding complexity to the code’s degree distribution and code rate. One natural optimization problem that arises in this context is to maximize code rate for a given channel subject to a constraint on decoding complexity. At one extreme (no constraint on decoding complexity), one obtains the “threshold-optimized” LDPC codes that have been the focus of much attention in recent years. Such codes themselves represent one possible means of trading decoding complexity for rate, since they can be applied to channels better than the one for which they were designed, with the benefit of reduced decoding complexity. However, the codes optimized using the methods described in this paper are found to provide a better tradeoff, often achieving the same code rate with approximately one-third of the decoding complexity of the threshold-optimized codes.
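The complexity measure described above can be illustrated with a short sketch. The function and variable names below are assumptions for illustration, not the paper's code: given edge-perspective degree distributions (the standard λ(x) and ρ(x) of irregular LDPC design), the code rate and the number of edges per codeword bit follow from well-known formulas, and the per-bit complexity is modelled as one unit of work per edge per iteration, with the iteration count supplied externally (in the paper it comes from density evolution).

```python
# Hypothetical sketch of the two-factor decoding-complexity measure.
# lam[d] (resp. rho[d]) is the fraction of edges attached to
# degree-d variable (resp. check) nodes -- the edge-perspective
# degree distributions lambda(x) and rho(x).

def code_rate(lam, rho):
    """Design rate R = 1 - (sum_d rho_d / d) / (sum_d lam_d / d)."""
    inv_avg_v = sum(l / d for d, l in lam.items())  # = 1 / (avg variable degree)
    inv_avg_c = sum(r / d for d, r in rho.items())  # = 1 / (avg check degree)
    return 1.0 - inv_avg_c / inv_avg_v

def edges_per_bit(lam):
    """Average variable-node degree, i.e. factor-graph edges per codeword bit."""
    return 1.0 / sum(l / d for d, l in lam.items())

def decoding_complexity(lam, rho, num_iterations):
    """Per-bit complexity: (work per iteration, linear in edge count)
    times (estimated number of iterations to reach the target BER)."""
    return edges_per_bit(lam) * num_iterations

# Example: a regular (3,6) code -- every variable node has degree 3,
# every check node degree 6 -- so all edge mass sits on one degree.
lam = {3: 1.0}
rho = {6: 1.0}
print(code_rate(lam, rho))                    # rate 1/2
print(decoding_complexity(lam, rho, 50))      # 3 edges/bit * 50 iterations
```

The optimization problem posed in the abstract then amounts to searching over (lam, rho) to maximize `code_rate` subject to an upper bound on `decoding_complexity`, with `num_iterations` estimated by density evolution for each candidate distribution.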
