Continual Learning with Echo State Networks

Continual Learning (CL) refers to a learning setup in which the data distribution is non-stationary and the model must acquire new knowledge without forgetting what it has already learned. Studies of CL for sequential patterns have so far revolved around fully trained recurrent networks. In this work, we instead introduce CL in the context of Echo State Networks (ESNs), where the recurrent component is kept fixed. We provide the first evaluation of catastrophic forgetting in ESNs and highlight the benefits of CL strategies that are not applicable to trained recurrent models. Our results confirm the ESN as a promising model for CL and open the way to its use in streaming scenarios.
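
To make the fixed-reservoir setup concrete, below is a minimal NumPy sketch of the general ESN scheme, not the exact architecture or strategy of this paper; all names, hyperparameters, and the toy data are illustrative assumptions. Because the recurrent weights are frozen, the linear readout can be re-fit from accumulated sufficient statistics as tasks arrive, a streaming-friendly update that has no analogue when the recurrent weights themselves are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_reservoir, n_outputs = 3, 100, 2
spectral_radius, leak_rate, ridge = 0.9, 0.3, 1e-4

# Fixed random weights: the reservoir is never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
# Rescale to the target spectral radius, a common heuristic for the echo state property.
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def reservoir_states(inputs):
    """Run the fixed reservoir over a sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = (1 - leak_rate) * x + leak_rate * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Sufficient statistics for a ridge-regression readout. Only these
# accumulators are needed to update the readout task after task,
# without revisiting old data.
A = ridge * np.eye(n_reservoir)
B = np.zeros((n_reservoir, n_outputs))

def fit_task(inputs, targets):
    """Fold one task's data into the accumulators and re-solve the readout."""
    global A, B
    S = reservoir_states(inputs)
    A += S.T @ S
    B += S.T @ targets
    return np.linalg.solve(A, B)

# Toy random data standing in for two sequential tasks.
W_out = fit_task(rng.standard_normal((200, n_inputs)), rng.standard_normal((200, n_outputs)))
W_out = fit_task(rng.standard_normal((200, n_inputs)), rng.standard_normal((200, n_outputs)))
predictions = reservoir_states(rng.standard_normal((50, n_inputs))) @ W_out
```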
