A novel learning approach to multiple tasks based on boosting methodology

Boosting has become one of the state-of-the-art techniques in many supervised and semi-supervised learning applications. In this paper, we develop a novel boosting algorithm, MTBoost, for the multi-task learning problem. Many previous multi-task learning algorithms can only handle problems in low- or moderate-dimensional feature spaces, whereas MTBoost works on very high-dimensional data, such as text-mining data with tens of thousands of features. The experimental results show that MTBoost achieves significantly better classification performance than supervised single-task learning algorithms, and that it also outperforms several representative multi-task learning methods.
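This excerpt gives no pseudocode for MTBoost, so the following is only a rough, hypothetical sketch of the general idea behind multi-task boosting: an AdaBoost-style loop in which all tasks share each round's weak learner (here a decision stump) while keeping their own per-example weights. The function names, the stump learner, and the weighting scheme are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of multi-task boosting with shared weak learners.
# This is an illustrative assumption about the general flavor of the
# approach, NOT the MTBoost procedure from the paper.
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Decision stump: predict +1/-1 by thresholding a single feature."""
    return sign * np.where(X[:, feat] <= thresh, 1.0, -1.0)

def fit_shared_stump(tasks, weights):
    """Pick the one stump minimizing total weighted error over all tasks."""
    best, best_err = None, np.inf
    n_features = tasks[0][0].shape[1]
    for feat in range(n_features):
        values = np.unique(np.concatenate([X[:, feat] for X, _ in tasks]))
        for thresh in values:
            for sign in (1.0, -1.0):
                # Weighted error pooled across every task's examples.
                err = sum(
                    w @ (stump_predict(X, feat, thresh, sign) != y)
                    for (X, y), w in zip(tasks, weights)
                )
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def multitask_adaboost(tasks, n_rounds=10):
    """AdaBoost-style loop: one shared stump per round, per-task re-weighting.

    tasks: list of (X, y) pairs with labels y in {-1, +1}.
    """
    weights = [np.full(len(y), 1.0 / len(y)) for _, y in tasks]
    ensemble = []
    for _ in range(n_rounds):
        (feat, thresh, sign), err = fit_shared_stump(tasks, weights)
        # Normalize (each task's weights sum to 1) and clamp for stability.
        err = np.clip(err / len(weights), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, feat, thresh, sign))
        # Re-weight each task's examples toward its own mistakes.
        for (X, y), w in zip(tasks, weights):
            w *= np.exp(-alpha * y * stump_predict(X, feat, thresh, sign))
            w /= w.sum()
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all shared stumps."""
    return np.sign(sum(a * stump_predict(X, f, t, s)
                       for a, f, t, s in ensemble))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two related synthetic tasks: labels depend on the sign of feature 0.
    tasks = []
    for _ in range(2):
        X = rng.normal(size=(100, 5))
        y = np.where(X[:, 0] + 0.1 * rng.normal(size=100) > 0, 1.0, -1.0)
        tasks.append((X, y))
    ensemble = multitask_adaboost(tasks, n_rounds=5)
    acc = np.mean(predict(ensemble, tasks[0][0]) == tasks[0][1])
    print(f"task-0 training accuracy: {acc:.2f}")
```

In this sketch the shared stump is chosen to minimize the total weighted error pooled across tasks while each task re-weights only its own examples, which is one simple way a boosting loop could exploit shared structure between related tasks.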
