Continual Learning of Predictive Models in Video Sequences via Variational Autoencoders

This paper proposes a method for the continual learning of predictive models that infer future frames in video sequences. For a first given experience, an initial Variational Autoencoder and a set of fully connected neural networks learn, respectively, the appearance of video frames and their dynamics in the latent space. By employing an adapted Markov Jump Particle Filter, the proposed method recognizes new situations and incorporates them as additional predictive models, avoiding catastrophic forgetting of previously learned tasks. The method is evaluated on video sequences recorded from a vehicle performing different tasks in a controlled environment.
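The pipeline described above can be sketched as follows. This is a minimal, illustrative NumPy toy, not the paper's implementation: all layer sizes, the frame dimensionality, and the network initialisation are hypothetical, and training is omitted. It shows only the structural idea of a VAE encoder capturing appearance, a fully connected network modelling latent dynamics, and a decoder producing the predicted future frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 64-value frames, 8-dim latent space.
FRAME_DIM, LATENT_DIM = 64, 8

def make_mlp(sizes):
    """Randomly initialised fully connected network (untrained, for illustration)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Apply the layers with tanh activations on all but the last layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# VAE encoder: frame -> latent mean and log-variance (appearance model).
encoder = make_mlp([FRAME_DIM, 32, 2 * LATENT_DIM])
# VAE decoder: latent sample -> reconstructed/predicted frame.
decoder = make_mlp([LATENT_DIM, 32, FRAME_DIM])
# Separate fully connected network modelling latent dynamics: z_t -> z_{t+1}.
dynamics = make_mlp([LATENT_DIM, 16, LATENT_DIM])

def encode(frame):
    stats = forward(encoder, frame)
    mu, log_var = stats[:LATENT_DIM], stats[LATENT_DIM:]
    eps = rng.standard_normal(LATENT_DIM)
    return mu + np.exp(0.5 * log_var) * eps   # reparameterisation trick

def predict_next_frame(frame):
    z_t = encode(frame)                 # appearance -> latent state
    z_next = forward(dynamics, z_t)     # one latent dynamics step
    return forward(decoder, z_next)     # decode the predicted future frame

frame = rng.standard_normal(FRAME_DIM)
prediction = predict_next_frame(frame)
print(prediction.shape)
```

In the full method, several such dynamics networks would coexist (one per learned situation), with the Markov Jump Particle Filter selecting among them and triggering the creation of a new model when none explains the observations.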
