Bio-Inspired Spiking Convolutional Neural Network using Layer-wise Sparse Coding and STDP Learning

Hierarchical feature discovery using non-spiking convolutional neural networks (CNNs) has attracted much recent interest in machine learning and computer vision. However, it is still not well understood how to build a biologically plausible network of brain-like, spiking neurons that supports multi-layer, unsupervised learning. This paper explores a novel bio-inspired spiking CNN that is trained in a greedy, layer-wise fashion. The proposed network consists of a spiking convolutional-pooling layer followed by a feature discovery layer that extracts independent visual features. The kernels of the convolutional layer are trained with a local learning rule, implemented as a sparse, spiking auto-encoder that represents primary visual features. The feature discovery layer extracts independent features using probabilistic leaky integrate-and-fire (LIF) neurons that are sparsely active in response to stimuli. This layer of probabilistic LIF neurons implicitly provides lateral inhibition, which yields sparse, independent features. Experimental results show that the convolutional layer is stack-admissible, enabling it to support multi-layer learning. The visual features produced by the probabilistic LIF neurons in the feature discovery layer are used to train a classifier. The classification results attest to the independent and informative visual features extracted by the hierarchy of convolutional and feature discovery layers. The proposed model is evaluated on the MNIST digit dataset using both clean and noisy images. Recognition accuracy on clean images exceeds 98%. The performance loss on noisy images ranges from 0.1% to 8.5%, depending on noise type and density, indicating that the network is robust to additive noise.
