Low-delay low-complexity error-correcting codes on sparse graphs
This dissertation presents a systematic exposition of finite-block-length coding theory and practice. We begin by identifying the maximum achievable rates over noisy channels under a finite-block-length constraint, referred to as the (e, n)-capacity C_{e,n}, where e denotes the target block-error probability and n the block length. We also characterize what optimum codes look like under such constraints.

Constructing good, short-block-length error-correcting codes defined on sparse graphs is the focus of this thesis. We propose a new, general method for constructing Tanner graphs of large girth by progressively establishing connections between symbol and check nodes in an edge-by-edge manner, called the progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of the graph parameters. The PEG construction attains essentially the same girth as Gallager's explicit construction for regular graphs, and both meet or exceed an extension of the Erdos-Sachs bound. The PEG construction proves powerful for generating good, short-block-length binary LDPC codes. Furthermore, we show that the binary interpretation of GF(2^b) codes on the cycle Tanner graph TG(2, d_c), for sufficiently large b, yields codes that are "good for optimum decoding" as well as "good for iterative decoding" over the binary-input additive white Gaussian noise (AWGN) channel.

Codes on sparse graphs are often decoded iteratively by the sum-product algorithm (SPA) at low complexity. We investigate efficient digital implementations of the SPA for decoding binary LDPC codes from both architectural and algorithmic points of view, and describe new reduced-complexity derivatives of the algorithm.
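The edge-by-edge growth described above can be sketched as follows. This is a minimal illustration, not the dissertation's exact procedure: the lowest-degree tie-breaking and the fallback used when every check node is already reachable are simplifying assumptions.

```python
def peg_construct(n_sym, n_chk, sym_degrees):
    """Greedy PEG sketch: add one edge at a time from each symbol node,
    each time connecting to a check node NOT yet reachable from that
    symbol in the current graph (which locally maximises girth),
    breaking ties by lowest current check-node degree."""
    sym_adj = [set() for _ in range(n_sym)]   # symbol node -> its check nodes
    chk_adj = [set() for _ in range(n_chk)]   # check node  -> its symbol nodes
    chk_deg = [0] * n_chk

    def reachable_checks(s):
        """Layered BFS in the bipartite graph from symbol s, expanded
        until the set of reachable check nodes stops growing."""
        seen_chk, seen_sym = set(), {s}
        frontier = set(sym_adj[s])
        while frontier:
            seen_chk |= frontier
            next_syms = set()
            for c in frontier:
                next_syms |= chk_adj[c]
            next_syms -= seen_sym
            seen_sym |= next_syms
            new_chk = set()
            for t in next_syms:
                new_chk |= sym_adj[t]
            frontier = new_chk - seen_chk
        return seen_chk

    for s in range(n_sym):
        for k in range(sym_degrees[s]):
            if k == 0:
                cand = range(n_chk)               # first edge: any check node
            else:
                reached = reachable_checks(s)
                unreached = [c for c in range(n_chk) if c not in reached]
                # Fallback if all checks are reachable (simplification).
                cand = unreached if unreached else range(n_chk)
            cand = [c for c in cand if c not in sym_adj[s]]
            c = min(cand, key=lambda x: chk_deg[x])
            sym_adj[s].add(c)
            chk_adj[c].add(s)
            chk_deg[c] += 1
    return sym_adj
```

For example, `peg_construct(12, 6, [3] * 12)` grows a (3, 6)-regular Tanner graph with 12 symbol nodes and 6 check nodes, placing all 36 edges while avoiding short cycles wherever the unreached-check set is non-empty.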
The unified treatment of decoding techniques for LDPC codes offers the flexibility to select, for high-speed applications, a design point that balances performance, latency, and computational complexity.
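To illustrate one standard reduced-complexity derivative of the SPA (the scaled min-sum variant shown here, including the attenuation factor, is an illustrative assumption, not necessarily the dissertation's exact algorithm), the check-node update replaces the costly tanh-domain computation with sign and minimum operations:

```python
def minsum_check_update(llrs, scale=0.8):
    """One check-node update under the (scaled) min-sum approximation to
    the sum-product rule: each outgoing message takes its magnitude from
    the minimum of the OTHER incoming magnitudes and its sign from the
    product of the other incoming signs.  `scale` is an illustrative
    attenuation factor compensating the min-sum overestimate."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]   # exclude the message on edge i
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0
        mag = min(abs(v) for v in others)
        out.append(scale * sign * mag)
    return out
```

For instance, with incoming log-likelihood ratios `[2.0, -3.0, 1.0]` and `scale=1.0`, the update returns `[-1.0, 1.0, -2.0]`: only comparisons and sign flips are needed, which is what makes such derivatives attractive for digital implementation.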