Enabling Homeostasis using Temporal Decay Mechanisms in Spiking CNNs Trained with Unsupervised Spike Timing Dependent Plasticity

Convolutional Neural Networks (CNNs) have become the workhorse for image classification tasks. This success has driven the exploration of the Spike Timing Dependent Plasticity (STDP) learning rule applied to the convolutional architecture for complex datasets, as opposed to the fully connected architecture. Inhibitory neurons and adaptive thresholds are widely adopted methods of inducing homeostasis in fully connected spiking networks to aid the unsupervised learning process. These methods ensure that all neurons have approximately equal firing activity across time and that their receptive fields differ, which is generally referred to as homeostatic behavior. While the adaptive threshold is straightforward to implement in spiking CNNs, adding inhibitory neurons is not suited to the convolutional architecture because of its shared-weight nature. In this work, we first show that the adaptive threshold in isolation is insufficient for obtaining approximately equal firing activity across activation maps in a spiking CNN. Next, we develop weight and offset decay mechanisms that enable the desired behavior and complement the STDP learning rule and adaptive threshold. We empirically show that these decay mechanisms improve feature learning compared to baseline STDP in terms of accuracy (up to 1.4%) as well as enhanced homeostatic behavior among activation maps (more than halving the standard deviation). We discuss the complementary behavior of the decay mechanisms relative to the adaptive threshold in terms of the variance in the activity they induce. Finally, we show that when the convolutional features are trained on a subset of classes using STDP with decay mechanisms, the learned features are transferable to the subset of classes unseen by the convolutional layers.
Thus, the decay mechanisms not only encourage the network to learn better features for the task being trained for, but also capture common structure prevalent among the classes while encouraging contribution from all activation maps. We perform experiments and present our findings on the Extended MNIST (EMNIST) dataset.
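The abstract does not give the paper's exact update equations, so the following is only a minimal sketch of how temporal decay can complement STDP and an adaptive threshold: weights and per-map threshold offsets relax toward zero each step, so activation maps that rarely fire regain excitability over time. All constants, the simplified pair-based STDP update, and the fixed offset increment on firing are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constants; the paper's actual values are not stated in the abstract.
N_MAPS = 8        # number of activation maps (shared-weight convolutional filters)
KERNEL = 5        # convolution kernel size
ETA = 0.01        # STDP learning rate
W_DECAY = 1e-3    # weight-decay rate per time step
OFF_DECAY = 1e-3  # threshold-offset decay rate per time step
OFF_INC = 0.05    # offset increment applied to a map when it fires

weights = rng.uniform(0.0, 1.0, size=(N_MAPS, KERNEL, KERNEL))
offsets = np.zeros(N_MAPS)  # per-map adaptive-threshold offsets

def stdp_step(weights, pre_trace, post_spiked):
    """Simplified pair-based STDP: the shared weights of each map that fired
    move toward the presynaptic eligibility trace in its receptive field."""
    for m in np.flatnonzero(post_spiked):
        weights[m] += ETA * (pre_trace - weights[m])
    return weights

def decay_step(weights, offsets, post_spiked):
    """Temporal decay toward zero for both weights and threshold offsets,
    so infrequently firing maps gradually become easier to activate,
    while maps that just fired have their threshold offset raised."""
    weights *= (1.0 - W_DECAY)
    offsets *= (1.0 - OFF_DECAY)
    offsets[post_spiked] += OFF_INC
    return weights, offsets

# One illustrative time step: only map 0 fires.
pre_trace = rng.uniform(0.0, 1.0, size=(KERNEL, KERNEL))
post_spiked = np.zeros(N_MAPS, dtype=bool)
post_spiked[0] = True
weights = stdp_step(weights, pre_trace, post_spiked)
weights, offsets = decay_step(weights, offsets, post_spiked)
```

In this sketch the decay acts uniformly on all maps while the offset increment is selective, which is the qualitative mechanism the abstract describes: frequently firing maps are progressively penalized and silent maps recover, pushing the maps toward approximately equal firing activity.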
