CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding
This lecture covers our final subtopic within the “exact and approximate recovery” part of the course. The problem we study concerns error-correcting codes, which solve the problem of encoding information to be robust to errors (i.e., bit flips). We consider only binary codes, so the objects of study are n-bit vectors. Recall that a code is simply a subset of {0, 1}^n; elements of this subset are codewords. Recall that the Hamming distance dH(x, y) between two vectors is the number of coordinates in which they differ, and that the distance of a code is the minimum Hamming distance between two distinct codewords.

For example, consider the code {x ∈ {0, 1}^n : x has even parity}, whose codewords are the vectors with an even number of 1s. One way to think about this code is as the set of all (n − 1)-bit vectors, each with an extra “parity bit” appended at the end. The distance of this code is exactly 2. If an odd number of bit flips occurs during transmission, the error can be detected, and the receiver can request a retransmission from the sender. If a nonzero even number of bit flips occurs, then the receiver gets a valid codeword different from the one intended by the sender, and the error goes undetected.

In general, for a code with distance d, up to d − 1 adversarial errors can be detected. If the number of errors is less than d/2, then no retransmission is required: there is a unique codeword closest (in Hamming distance) to the transmission, and it is the codeword originally sent by the sender. That is, the corrupted transmission can be decoded by the receiver. This lecture studies the computational problem of decoding a corrupted codeword, and conditions under which the problem can be solved efficiently using linear programming. This approach works well for a useful family of codes, described next.
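The definitions above (Hamming distance, code distance, detection, and nearest-codeword decoding) can be made concrete with a small sketch. The function and variable names here are illustrative, not from the lecture; brute-force nearest-codeword search is used only to demonstrate the definitions, since it is exponential in n and is exactly what LP decoding is meant to avoid.

```python
from itertools import product

def hamming_distance(x, y):
    """Number of coordinates in which two equal-length bit vectors differ."""
    return sum(a != b for a, b in zip(x, y))

def even_parity_code(n):
    """The parity-check code: all n-bit vectors with an even number of 1s."""
    return [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]

def nearest_codeword(received, code):
    """Brute-force decoding: the codeword closest in Hamming distance."""
    return min(code, key=lambda c: hamming_distance(received, c))

code = even_parity_code(4)

# Distance of the code: minimum Hamming distance over distinct codeword pairs.
d = min(hamming_distance(x, y) for x in code for y in code if x != y)
print(d)  # the parity code has distance exactly 2

sent = (1, 0, 1, 0)        # a codeword (even number of 1s)
corrupted = (1, 1, 1, 0)   # one bit flip: odd parity, so not a codeword
print(corrupted in code)   # False -- the single error is detected
```

Note that with d = 2 a single flip is detected but not uniquely correctable: the corrupted vector is at distance 1 from several codewords, which matches the d/2 threshold in the text.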