Multi-label Classification with Error-correcting Codes

We formulate a framework for applying error-correcting codes (ECC) to multi-label classification problems. The framework treats the base learners as noisy channels and uses ECC to correct the prediction errors made by the learners. An immediate use of the framework is a novel ECC-based explanation of the popular random k-label-sets (RAKEL) algorithm using a simple repetition ECC. Using the framework, we empirically compare a broad spectrum of ECC designs for multi-label classification. The results not only demonstrate that RAKEL can be improved by applying stronger ECC, but also show that the traditional Binary Relevance approach can be enhanced by learning additional parity-checking labels. In addition, our study of different ECC helps in understanding the trade-off between the strength of the ECC and the hardness of the base learning tasks.
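The ECC view sketched above can be illustrated with a minimal example, assuming a simple 3x repetition code (the code that the paper uses to explain RAKEL): each original label bit is repeated three times to form a codeword, the base learners act as a noisy channel over the codeword bits, and majority-vote decoding recovers the label vector. The repetition factor and function names here are illustrative choices, not the paper's implementation.

```python
import numpy as np

REPEAT = 3  # repetition factor (hypothetical choice for illustration)

def encode(y):
    """Repeat each label bit REPEAT times to form the ECC codeword."""
    return np.repeat(y, REPEAT)

def decode(codeword):
    """Majority-vote each group of REPEAT bits back to one label."""
    groups = codeword.reshape(-1, REPEAT)
    return (groups.sum(axis=1) * 2 > REPEAT).astype(int)

# Ground-truth label vector for one instance (K = 4 labels).
y = np.array([1, 0, 1, 1])
code = encode(y)

# Simulate the "noisy channel": the base learners flip one codeword bit.
noisy = code.copy()
noisy[0] ^= 1

# The single prediction error is corrected by majority-vote decoding.
assert np.array_equal(decode(noisy), y)
```

A stronger ECC (e.g., with parity-checking bits) trades a harder set of base learning tasks for more correctable errors, which is the trade-off the abstract refers to.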