CVPR 2020 Continual Learning in Computer Vision Competition: Approaches, Results, Current Challenges and Future Directions

In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks, with the shared objective of making current AI systems more adaptive, efficient and autonomous. However, despite the significant and undoubted progress of the field in addressing the issue of catastrophic forgetting, benchmarking different continual learning approaches is a difficult task in itself. In fact, given the proliferation of different settings, training and evaluation protocols, metrics and nomenclature, it is often tricky to properly characterize a continual learning algorithm, relate it to other solutions and gauge its real-world applicability. The first Continual Learning in Computer Vision challenge, held at CVPR in 2020, was one of the first opportunities to evaluate different continual learning algorithms on common hardware, with a large set of shared evaluation metrics and three different settings based on the realistic CORe50 video benchmark. In this paper, we report the main results of the competition, which counted more than 79 registered teams, 11 finalists and $2,300 in prizes. We also summarize the winning approaches, current challenges and future research directions.
