A Multitask Learning Model for Online Pattern Recognition

This paper presents a new learning algorithm for multitask pattern recognition (MTPR) problems. We consider learning multiple multiclass classification tasks online, where no information is ever provided about the task category of a training example. The algorithm therefore needs an automated task-recognition capability to learn the different classification tasks properly. The learning mode is "online": training examples for different tasks are mixed randomly and presented sequentially, one after another. We assume that the classification tasks are related to each other and that both the tasks and their training examples appear in random order during online training. Thus, the learning algorithm has to continually switch from learning one task to another whenever the training examples change to a different task. This also implies that the algorithm must detect task changes automatically and exploit knowledge of previously learned tasks to learn new tasks quickly. The performance of the algorithm is evaluated on ten MTPR problems constructed from five University of California at Irvine (UCI) data sets. The experiments verify that the proposed algorithm can indeed acquire and accumulate task knowledge, and that transferring knowledge from already-learned tasks improves both the speed of knowledge acquisition on new tasks and the final classification accuracy. In addition, introducing the reorganization process greatly improves task-categorization accuracy on all MTPR problems, even when the presentation order of class training examples is strongly biased.
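To make the online setting concrete, the sketch below illustrates the general idea of learning from a task-unlabeled stream: a sustained spike in the sliding-window error rate signals a task change, after which the learner either reactivates a stored task that explains the current example or allocates a new one. This is a minimal conceptual illustration, not the paper's actual MTPR algorithm; the nearest-centroid classifier, the window size, and the error threshold are all illustrative assumptions.

```python
import numpy as np

class OnlineMultitaskLearner:
    """Conceptual sketch of online multitask learning without task labels.

    A task change is inferred from a sustained rise in the sliding-window
    error rate; on a change, an existing task model is reactivated if it
    already explains the current example, otherwise a new task is created.
    All design choices here are illustrative, not the paper's algorithm.
    """

    def __init__(self, window=10, error_threshold=0.6):
        self.window = window
        self.error_threshold = error_threshold
        self.tasks = []           # each task: dict mapping class label -> centroid
        self.active = None        # index of the task believed to be current
        self.recent_errors = []

    def _predict_with(self, task, x):
        if not task:
            return None
        labels = list(task)
        dists = [np.linalg.norm(x - task[c]) for c in labels]
        return labels[int(np.argmin(dists))]

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        if self.active is None:
            self.tasks.append({})
            self.active = 0
        task = self.tasks[self.active]

        # Track prediction errors of the active task on the stream.
        pred = self._predict_with(task, x)
        self.recent_errors.append(0 if pred == y else 1)
        self.recent_errors = self.recent_errors[-self.window:]

        # Task-change detection: sustained high error suggests the
        # stream has switched to a different classification task.
        if (len(self.recent_errors) == self.window
                and np.mean(self.recent_errors) > self.error_threshold):
            self._switch_task(x, y)
            task = self.tasks[self.active]

        # Online update: move the class centroid toward the new example.
        if y in task:
            task[y] = 0.9 * task[y] + 0.1 * x
        else:
            task[y] = x.copy()

    def _switch_task(self, x, y):
        # Reactivate a stored task that already explains the example,
        # i.e. reuse previously accumulated task knowledge; otherwise
        # allocate a fresh task model.
        for i, task in enumerate(self.tasks):
            if self._predict_with(task, x) == y:
                self.active = i
                break
        else:
            self.tasks.append({})
            self.active = len(self.tasks) - 1
        self.recent_errors = []

    def predict(self, x):
        return self._predict_with(self.tasks[self.active],
                                  np.asarray(x, dtype=float))
```

Feeding the learner a block of examples from one task and then a block with conflicting labels causes the error window to saturate and a second task model to be allocated, mimicking the automatic task switching described above.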
