Unsupervised Generative Variational Continual Learning

Continual learning aims to learn a sequence of tasks without forgetting previously learned tasks. While most of the existing continual learning literature targets class-incremental learning in a supervised setting, there is enormous potential for unsupervised continual learning with generative models. This paper proposes a combination of architectural pruning and neuron addition in generative variational models for unsupervised generative continual learning (UGCL). Evaluations on standard benchmark datasets demonstrate the superior generative ability of the proposed method.
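
The abstract names two architectural operations, magnitude-based pruning and neuron addition, applied to a variational model. The sketch below illustrates what such operations can look like on a small PyTorch VAE; the network sizes, keep ratio, growth amount, and the VAE structure itself are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed setup, not the paper's code): pruning low-magnitude
# weights in a VAE to free capacity, then widening a hidden layer with new
# neurons before training on the next task.
import torch
import torch.nn as nn

class SmallVAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec_hidden = nn.Linear(latent, hidden)
        self.dec_out = nn.Linear(hidden, in_dim)

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return torch.sigmoid(self.dec_out(torch.relu(self.dec_hidden(z)))), mu, logvar

def prune_by_magnitude(layer: nn.Linear, keep_ratio: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude weights of a layer; returns the binary mask
    (the mask can be reapplied to gradients to keep pruned weights at zero)."""
    with torch.no_grad():
        flat = layer.weight.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        threshold = torch.topk(flat, k, largest=True).values.min()
        mask = (layer.weight.abs() >= threshold).float()
        layer.weight.mul_(mask)
    return mask

def add_neurons(layer_in: nn.Linear, layers_out, extra: int):
    """Widen `layer_in` by `extra` output neurons and resize every downstream
    consumer in `layers_out`; existing weights are copied over unchanged."""
    new_in = nn.Linear(layer_in.in_features, layer_in.out_features + extra)
    with torch.no_grad():
        new_in.weight[: layer_in.out_features] = layer_in.weight
        new_in.bias[: layer_in.out_features] = layer_in.bias
    new_outs = []
    for old in layers_out:
        new = nn.Linear(old.in_features + extra, old.out_features)
        with torch.no_grad():
            new.weight[:, : old.in_features] = old.weight
            new.bias.copy_(old.bias)
        new_outs.append(new)
    return new_in, new_outs

# Example usage: prune after finishing task t, then grow before task t+1.
vae = SmallVAE()
enc_mask = prune_by_magnitude(vae.enc, keep_ratio=0.5)
vae.enc, (vae.mu, vae.logvar) = add_neurons(vae.enc, [vae.mu, vae.logvar], extra=16)
```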
