Multilabel Consensus Classification

In the era of big data, large amounts of noisy and incomplete data can be collected from multiple sources for prediction tasks. Combining multiple models or data sources helps counteract the effects of low data quality and the bias of any single model or source, and can thus improve the robustness and performance of predictive models. Owing to privacy, storage, and bandwidth constraints, one must in certain circumstances combine the predictions of multiple models or data sources without access to the raw data, and consensus-based prediction combination algorithms are effective in such situations. However, current research on prediction combination focuses on the single-label setting, where each instance has one and only one label, whereas data today are usually multilabeled, so that multiple labels must be predicted simultaneously. Directly applying existing prediction combination methods to the multilabel setting can lead to degraded performance. In this paper, we address the challenges of combining predictions from multiple multilabel classifiers and propose two novel algorithms: MLCM-r (MultiLabel Consensus Maximization for ranking) and MLCM-a (MLCM for microAUC). These algorithms capture the label correlations that are common in multilabel classification and optimize the corresponding performance metrics. Experimental results on popular multilabel classification tasks verify the theoretical analysis and the effectiveness of the proposed methods.
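To make the setting concrete, the sketch below shows the naive baseline the abstract argues against: combining the binary predictions of several multilabel classifiers by independent per-label majority voting, which treats every label in isolation and therefore ignores the label correlations that MLCM-r and MLCM-a are designed to exploit. This is a minimal illustration assuming NumPy; the function name, array shapes, and data are hypothetical and not taken from the paper.

```python
import numpy as np

def labelwise_majority_vote(predictions):
    """Combine M multilabel classifiers by per-label majority voting.

    predictions: binary array of shape (M, n_instances, n_labels),
    one 0/1 vote per model, instance, and label.
    Returns an (n_instances, n_labels) consensus obtained by
    thresholding each label's vote fraction at 0.5, with every
    label decided independently of the others.
    """
    votes = np.mean(predictions, axis=0)  # fraction of models voting 1
    return (votes >= 0.5).astype(int)

# Example: 3 models, 2 instances, 3 labels
preds = np.array([
    [[1, 0, 1], [0, 1, 0]],
    [[1, 1, 0], [0, 1, 1]],
    [[0, 0, 1], [1, 1, 0]],
])
print(labelwise_majority_vote(preds))
# [[1 0 1]
#  [0 1 0]]
```

Because each label is thresholded separately, this baseline can output label combinations that no individual model predicted and that rarely co-occur in the data; this is the kind of degradation a correlation-aware consensus method aims to avoid.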
