Collaborating between Local and Global Learning for Distributed Online Multiple Tasks

This paper studies a novel learning scenario, Distributed Online Multi-task learning (DOM), in which learning individuals with continuously arriving data are distributed across separate clients and yet need to learn their individual models collaboratively. The problem combines three characteristics: distributed learning, online learning, and multi-task learning. It is motivated by emerging applications of wearable devices, which aim to provide intelligent monitoring services such as health-emergency alarming and movement recognition. To the best of our knowledge, no previous work has addressed this kind of problem, so this paper proposes a collaborative learning scheme that alternates between local learning and global learning. First, each client performs online learning on its locally accumulating data. Then, when a client triggers a predefined condition, DOM switches to global learning on the server side, for which an asynchronous online multi-task learning method is proposed. In this step, only the model of the client that triggered global learning is updated, with the support of its difficult local data instances and the other clients' models. Experiments on four applications show that the proposed global learning significantly improves local learning. The DOM framework is effective, since it shares knowledge among distributed tasks and obtains better models than learning each task separately. It is also communication-efficient, requiring clients to send only a small portion of their raw data to the server.
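The alternation between local and global learning described above can be illustrated with a minimal sketch. This is not the paper's algorithm: the local learner here is a standard online Passive-Aggressive update, the "difficult instance" test, the trigger condition (a full buffer), and the `global_update` rule (averaging toward the other clients' models, then replaying buffered instances) are all hypothetical stand-ins chosen only to show the overall protocol shape.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """Passive-Aggressive (PA-I) update for a linear classifier;
    a stand-in for whatever local online learner each client runs."""
    loss = max(0.0, 1.0 - y * w.dot(x))
    if loss > 0.0:
        tau = min(C, loss / x.dot(x))
        w = w + tau * y * x
    return w, loss

def global_update(w_k, hard_instances, other_models, lam=0.5):
    """Hypothetical server-side step: pull client k's model toward the
    mean of the other clients' models, then replay its hard instances."""
    w = (1.0 - lam) * w_k + lam * np.mean(other_models, axis=0)
    for x, y in hard_instances:
        w, _ = pa_update(w, x, y)
    return w

rng = np.random.default_rng(0)
d, n_clients = 5, 3
models = [np.zeros(d) for _ in range(n_clients)]
buffers = [[] for _ in range(n_clients)]
w_true = rng.normal(size=d)          # shared ground truth for the toy tasks

for t in range(200):
    k = t % n_clients                 # a new instance arrives at client k
    x = rng.normal(size=d)
    y = 1.0 if w_true.dot(x) > 0 else -1.0
    models[k], loss = pa_update(models[k], x, y)   # local online learning
    if loss > 1.0:                    # misclassified: treat as "difficult"
        buffers[k].append((x, y))
    if len(buffers[k]) >= 5:          # trigger condition: ask the server
        others = [models[j] for j in range(n_clients) if j != k]
        models[k] = global_update(models[k], buffers[k], others)
        buffers[k] = []               # only a few raw instances ever leave

test_X = rng.normal(size=(100, d))
test_y = np.sign(test_X.dot(w_true))
acc = float(np.mean(np.sign(test_X.dot(models[0])) == test_y))
print(f"client 0 test accuracy: {acc:.2f}")
```

Note that, as in the abstract, only the triggering client's model is touched in the global step, and communication is limited to the small buffer of difficult instances rather than the full data stream.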
