Using Many Machines to Handle an Enormous Error-Correcting Code

We investigate the problem of using many machines to represent, encode, and decode an error-correcting code with an extremely large block length. Standard encoding and decoding algorithms break down when scaled to block lengths so large that random access to the data is no longer possible. We apply Google's massive computing infrastructure, together with the MapReduce programming abstraction, to encode and decode a Tornado code over the erasure channel.
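To make the approach concrete, the sketch below shows how one layer of Tornado-style XOR parity encoding can be phrased as a MapReduce job. This is an illustrative assumption, not the paper's implementation: the sparse graph `edges`, the function names, and the in-memory shuffle are hypothetical, and a real deployment would shard the data across machines rather than run in a single process.

```python
# A minimal sketch of one Tornado-code encoding layer as MapReduce.
# Assumptions: `edges` is the sparse bipartite graph of one layer,
# symbols are small integers, and the shuffle is simulated in memory.

from collections import defaultdict
from functools import reduce
from operator import xor

def map_phase(edges, data):
    """Map: for each (data_index, check_index) edge of the sparse
    graph, emit the data symbol keyed by its check node."""
    for data_idx, check_idx in edges:
        yield check_idx, data[data_idx]

def reduce_phase(pairs):
    """Shuffle + Reduce: group emitted symbols by check node and
    XOR each group to produce that check node's parity symbol."""
    groups = defaultdict(list)
    for check_idx, symbol in pairs:
        groups[check_idx].append(symbol)
    return {c: reduce(xor, syms) for c, syms in groups.items()}

if __name__ == "__main__":
    data = [0b1010, 0b0111, 0b1100, 0b0001]           # four data symbols
    edges = [(0, 0), (1, 0), (1, 1), (2, 1), (3, 1)]  # sparse graph edges
    parity = reduce_phase(map_phase(edges, data))
    print(parity)  # check 0 = data[0] ^ data[1], check 1 = data[1] ^ data[2] ^ data[3]
```

The key point of the phrasing is that each map worker touches only local shards of the data, and the shuffle routes symbols to the check nodes that need them, so no machine ever requires random access to the full codeword.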