Multi-label Learning with Label-Specific Features via Clustering Ensemble

Multi-label learning deals with objects of rich semantics, where each example is associated with multiple class labels simultaneously. Intuitively, each class label is supposed to possess specific characteristics of its own, so exploiting label-specific features serves as one of the promising techniques for learning from multi-label examples. Specifically, the LIFT approach generates label-specific features by clustering the multi-label training examples in a label-wise manner, but it ignores label correlations that could improve generalization performance. In this paper, a new multi-label learning method named LIFTACE (multi-label learning with Label-specIfic FeaTures viA Clustering Ensemble) is proposed, which generates label-specific features by considering label correlations via clustering ensemble techniques. Extensive experimental results show that LIFTACE achieves better generalization performance than LIFT by exploiting label correlations during label-specific feature generation.
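To make the label-wise clustering idea concrete, below is a minimal sketch of LIFT-style label-specific feature construction for a single label: the positive and negative instances of that label are clustered separately, and each instance is then re-represented by its distances to the resulting cluster centers. This is an illustrative simplification, not the authors' implementation; the function names (`lift_features`, `kmeans`) and the pure-Python k-means are assumptions for the sketch, and LIFTACE would further combine multiple such clusterings across correlated labels via a clustering ensemble.

```python
import math
import random

def dist2(a, b):
    # squared Euclidean distance between two points (tuples)
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    # component-wise mean of a non-empty list of points
    n = len(pts)
    return tuple(sum(coords) / n for coords in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    # toy Lloyd's k-means; returns k cluster centers
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: dist2(p, centers[c]))
            groups[idx].append(p)
        # keep the old center if a cluster becomes empty
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def lift_features(X, y, ratio=0.5):
    # LIFT-style mapping for one label: cluster positives and
    # negatives separately, then represent every instance by its
    # distances to all cluster centers of both groups
    pos = [x for x, t in zip(X, y) if t == 1]
    neg = [x for x, t in zip(X, y) if t == 0]
    m = max(1, int(ratio * min(len(pos), len(neg))))
    centers = kmeans(pos, m) + kmeans(neg, m)
    return [[math.sqrt(dist2(x, c)) for c in centers] for x in X]

# usage: six 2-D instances, one binary label
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [1, 1, 1, 0, 0, 0]
feats = lift_features(X, y)
```

With `ratio=0.5` and three positives/negatives, one cluster is formed per group, so each instance is mapped to a 2-dimensional label-specific representation; a binary classifier for this label would then be trained on `feats` instead of `X`.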
