ORACLE: Order-Robust Adaptive Continual LEarning

The order in which tasks arrive at a continual learning model may have a large impact on the performance of each task as well as on the task-average performance. This order-sensitivity can cause serious problems in real-world scenarios where fairness plays a critical role (e.g., medical diagnosis). To tackle this problem, we propose a novel order-robust continual learning method which, instead of learning a completely shared set of weights, represents the parameters for each task as the sum of task-shared parameters that capture generic representations and task-adaptive parameters that capture task-specific ones, where the latter are factorized into sparse low-rank matrices to minimize the increase in capacity. With this parameter decomposition, when training on a new task, the task-adaptive parameters for earlier tasks remain mostly unaffected; we update them only to reflect the changes made to the task-shared parameters. This prevents catastrophic forgetting of old tasks and at the same time makes the model less sensitive to the task arrival order. We validate our Order-Robust Adaptive Continual LEarning (ORACLE) method on multiple benchmark datasets against state-of-the-art continual learning methods. The results show that it largely outperforms these strong baselines with a significantly smaller increase in capacity and training time, and that it exhibits a smaller per-task performance disparity across different task orders.
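To make the decomposition concrete, the following is a minimal PyTorch-style sketch of a linear layer whose per-task weight is the sum of a task-shared matrix and a low-rank, sparsity-regularized task-adaptive term. The class name `DecomposedLinear`, the chosen rank, and the `sparsity_penalty` helper are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DecomposedLinear(nn.Module):
    """Linear layer with additive parameter decomposition (illustrative sketch):
    the weight used for task t is a task-shared matrix plus a low-rank,
    sparsity-regularized task-adaptive term U_t V_t."""

    def __init__(self, in_features, out_features, num_tasks, rank=4):
        super().__init__()
        # Task-shared parameters: capture generic representations reused by all tasks.
        self.shared = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        # Task-adaptive parameters: one low-rank factor pair (U_t, V_t) per task.
        self.U = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(out_features, rank)) for _ in range(num_tasks)])
        self.V = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(rank, in_features)) for _ in range(num_tasks)])
        self.bias = nn.Parameter(torch.zeros(out_features))

    def weight_for_task(self, t):
        # theta_t = shared + U_t V_t  (the task-adaptive term is low-rank).
        return self.shared + self.U[t] @ self.V[t]

    def forward(self, x, t):
        return nn.functional.linear(x, self.weight_for_task(t), self.bias)

    def sparsity_penalty(self, t, lam=1e-4):
        # L1 penalty keeping the task-adaptive factors sparse, so the
        # capacity added per task stays small.
        return lam * (self.U[t].abs().sum() + self.V[t].abs().sum())
```

In such a setup, training on a new task updates the shared weights and only that task's factors, while the factors of earlier tasks are left (mostly) frozen, which is what keeps old-task performance stable regardless of the order in which tasks arrive.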
