Controlled Forgetting: Targeted Stimulation and Dopaminergic Plasticity Modulation for Unsupervised Lifelong Learning in Spiking Neural Networks

Stochastic gradient descent requires that training samples be drawn uniformly at random from the data distribution. For a deployed system that must learn online from an uncontrolled and unknown environment, the ordering of input samples often fails to meet this criterion, making lifelong learning a difficult challenge. We exploit the locality of the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule so that targeted subsets of a segmented Spiking Neural Network (SNN) adapt to novel information while the remainder of the SNN is protected from catastrophic forgetting. In our system, novel information triggers stimulated firing, inspired by biological dopamine signals, to boost STDP in the synapses of neurons associated with outlier information. This targeting controls the forgetting process in a way that reduces accuracy degradation while new information is learned. Our preliminary results on the MNIST dataset validate the capability of such a system to learn successfully over time from an unknown, changing environment, achieving up to 93.88% accuracy on a completely disjoint dataset.
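To make the mechanism concrete, below is a minimal sketch of dopamine-modulated, pair-based STDP with stimulated firing of targeted neurons. It is an illustration under assumed trace dynamics and constants, not the authors' implementation; the array shapes, time constants, and the per-neuron `dopamine` gain vector are all hypothetical.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's implementation):
# pair-based STDP whose update is scaled by a per-neuron "dopamine" gain,
# so only neurons stimulated on novel (outlier) inputs adapt strongly.

rng = np.random.default_rng(0)

n_in, n_out = 784, 100                        # e.g., MNIST pixels -> excitatory neurons
w = rng.uniform(0.0, 0.3, size=(n_in, n_out)) # synaptic weights
pre_trace = np.zeros(n_in)                    # presynaptic eligibility traces
post_trace = np.zeros(n_out)                  # postsynaptic eligibility traces

tau_pre, tau_post = 20.0, 20.0                # trace time constants (ms), assumed
a_plus, a_minus = 0.01, 0.012                 # baseline potentiation / depression rates, assumed
w_max, dt = 1.0, 1.0

def stdp_step(pre_spikes, post_spikes, dopamine):
    """One simulation step of dopamine-modulated STDP.

    pre_spikes  : bool array (n_in,)   -- input spikes this step
    post_spikes : bool array (n_out,)  -- output spikes this step
    dopamine    : float array (n_out,) -- plasticity gain per output neuron
                  (1.0 = baseline; >1.0 for neurons stimulated on novel input)
    """
    global pre_trace, post_trace, w

    # Decay eligibility traces, then register this step's spikes.
    pre_trace += (-pre_trace / tau_pre) * dt
    post_trace += (-post_trace / tau_post) * dt
    pre_trace[pre_spikes] += 1.0
    post_trace[post_spikes] += 1.0

    # Potentiation: presynaptic trace present when a postsynaptic neuron fires.
    dw_plus = a_plus * np.outer(pre_trace, post_spikes.astype(float))
    # Depression: postsynaptic trace present when a presynaptic neuron fires.
    dw_minus = a_minus * np.outer(pre_spikes.astype(float), post_trace)

    # The dopamine gain scales the net update column-wise, so weight changes
    # concentrate in the targeted (stimulated) neurons.
    w += dopamine[None, :] * (dw_plus - dw_minus)
    np.clip(w, 0.0, w_max, out=w)

# Example step: a novel input stimulates firing in a few chosen neurons,
# whose plasticity gain is raised so they absorb the new pattern while the
# rest of the network is left largely untouched.
pre = rng.random(n_in) < 0.05
post = np.zeros(n_out, dtype=bool)
targeted = rng.choice(n_out, size=5, replace=False)
post[targeted] = True                         # stimulated ("forced") firing
dopamine = np.ones(n_out)
dopamine[targeted] = 5.0                      # boosted plasticity for targeted neurons
stdp_step(pre, post, dopamine)
```

In this sketch, protection of previously learned information comes from keeping the dopamine gain at baseline (or near zero) for non-targeted neurons, so their synapses change little while the stimulated subset adapts to the outlier input.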
