PERFORMANCE AND COMPLEXITY COMPARISON OF LOW DENSITY PARITY CHECK CODES AND TURBO CODES

The last decade has seen a step change in the area of error correction coding for digital communication. Whilst it was always generally accepted that codes exist which get close to the capacity limits predicted by Shannon, it was not until Berrou presented a series of new results at the ICC in 1993 [3] that real evidence of this closeness was published. Moreover, Shannon's analysis suggested that optimal codes would be random-like, which intuitively implies that decoding them would be prohibitively complex. The ingenuity of Berrou's coding scheme, dubbed Turbo codes, was that a very complex overall code could be constructed by combining two or more simple codes. The component codes could then be decoded separately, with the decoders exchanging probabilistic, or uncertainty, information about the decoding of each bit with one another. Complex codes had thus become practical. This discovery triggered a series of new, focused research programmes, and prominent researchers devoted their time to this new area.

1996 saw a further breakthrough. Following on from the work on Turbo codes, MacKay at the University of Cambridge revisited work undertaken some 35 years earlier by Gallager [5], who had constructed a class of codes dubbed Low Density Parity Check (LDPC) codes. Building on the improved understanding of iterative decoding and probability propagation on graphs that grew out of the work on Turbo codes, MacKay was able to show that LDPC codes could be decoded in a similar manner to Turbo codes, and might even outperform them [6].

As a review, this paper considers both classes of codes and compares their performance and complexity. A description of both classes of codes will be given.
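To make the parallel-concatenation principle described above concrete, the following Python sketch encodes a block of data bits with two copies of a simple recursive systematic convolutional (RSC) component encoder, the second operating on a pseudo-randomly interleaved copy of the data. The component polynomials (feedback 1+D+D^2, feedforward 1+D^2), the random interleaver, the absence of trellis termination and the unpunctured rate-1/3 output are illustrative assumptions only, not the parameters of the scheme in [3].

    import random

    def rsc_parity(bits):
        """Parity stream of a rate-1/2 recursive systematic convolutional
        encoder (feedback 1+D+D^2, feedforward 1+D^2). Only the parity
        output is returned; the systematic bits are transmitted separately."""
        s1 = s2 = 0
        parity = []
        for u in bits:
            a = u ^ s1 ^ s2          # recursive (feedback) term
            parity.append(a ^ s2)    # feedforward term
            s2, s1 = s1, a           # shift-register update
        return parity

    def turbo_encode(bits, seed=0):
        """Parallel concatenation: the same simple component encoder is
        applied to the data and to an interleaved copy of the data."""
        rng = random.Random(seed)
        pi = list(range(len(bits)))
        rng.shuffle(pi)                          # interleaver permutation
        parity1 = rsc_parity(bits)
        parity2 = rsc_parity([bits[i] for i in pi])
        return bits + parity1 + parity2          # systematic + two parity streams

    codeword = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0])
    print(len(codeword))   # 24 coded bits for 8 data bits (rate 1/3)

Each component code on its own is weak and easy to decode; it is the interleaver between them that makes the overall code appear random-like, while the iterative exchange of per-bit probabilities between the two simple decoders keeps the decoding tractable.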