Confidence Weighted Multitask Learning

Traditional online multitask learning utilizes only the first-order information of the data stream. To remedy this issue, we propose a confidence-weighted multitask learning algorithm that maintains a Gaussian distribution over each task model to guide the online learning process. The mean (covariance) of the Gaussian distribution is the sum of a local component and a global component shared among all the tasks. In addition, this paper addresses the challenge of active learning in the online multitask setting. Instead of requiring labels for all instances, the proposed algorithm decides whether the learner should acquire a label by considering the confidence of its related tasks in the label prediction. Theoretical results show that the regret bounds can be significantly reduced. Empirical results demonstrate that the proposed algorithm achieves promising learning efficacy while simultaneously minimizing the labeling cost.
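The key ideas in the abstract (per-task Gaussian split into global and local components, plus a confidence-gated label query) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the class name, the `eta` mixing parameter, the passive-aggressive-style update, and the query threshold are all assumptions introduced for illustration.

```python
import numpy as np

class CWMultiTaskSketch:
    """Illustrative sketch (not the paper's algorithm): each task's weight
    mean and diagonal covariance are a shared global part plus a task-local
    part; a label is queried only when predictive confidence is low."""

    def __init__(self, n_tasks, dim, eta=0.9, query_threshold=1.0):
        self.mu_global = np.zeros(dim)
        self.mu_local = np.zeros((n_tasks, dim))
        self.sigma_global = np.ones(dim)           # diagonal covariance
        self.sigma_local = np.ones((n_tasks, dim))
        self.eta = eta                             # global/local mixing (assumed)
        self.query_threshold = query_threshold     # query rule scale (assumed)

    def _params(self, task):
        # Task model = global component + task-local component.
        mu = self.mu_global + self.mu_local[task]
        sigma = self.sigma_global + self.sigma_local[task]
        return mu, sigma

    def should_query(self, task, x):
        # Acquire a label only when the margin is small relative to the
        # predictive variance, i.e. the model is not yet confident.
        mu, sigma = self._params(task)
        margin = abs(mu @ x)
        variance = (sigma * x * x).sum()
        return margin <= self.query_threshold * np.sqrt(variance)

    def update(self, task, x, y):
        # Passive-aggressive-style confidence-weighted update; y in {-1, +1}.
        mu, sigma = self._params(task)
        margin = y * (mu @ x)
        if margin >= 1.0:
            return  # already correct with sufficient margin
        variance = (sigma * x * x).sum()
        alpha = (1.0 - margin) / (variance + 1e-12)
        step = alpha * y * sigma * x
        # Split the mean update between the global and local components.
        self.mu_global += self.eta * step
        self.mu_local[task] += (1.0 - self.eta) * step
        # Shrink variance along directions the example was informative in.
        shrink = 1.0 / (1.0 + alpha * x * x)
        self.sigma_global *= shrink
        self.sigma_local[task] *= shrink
```

Because the global component is shared, an update on one task also sharpens the confidence estimates used by the query rule of every other task, which is how the related tasks' confidence can cut down label requests.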
