Deep Spiking Networks

We introduce an algorithm for performing backpropagation in a spiking neural network. Our network is "spiking" in the sense that its neurons accumulate their activation into a potential over time, and emit a signal (a "spike") only when this potential crosses a threshold, at which point the neuron is reset. Neurons update their states only upon receiving signals from other neurons, so the network's total computation scales with the number of spikes an input causes rather than with the size of the network. We show that the spiking Multi-Layer Perceptron behaves identically, during both prediction and training, to a conventional deep network of rectified-linear units, in the limiting case where the spiking network is run for a long time. We apply this architecture to a conventional classification problem (MNIST) and achieve performance very close to that of a conventional Multi-Layer Perceptron with the same architecture. Our network is a natural architecture for learning from streaming event-based data, and is a stepping stone towards using spiking neural networks to learn efficiently on such streams.
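The claimed equivalence to a rectified-linear network rests on a simple observation: an integrate-and-fire unit driven by a constant input fires at a long-run rate equal to the positive part of that input. The sketch below illustrates this intuition only; the function name `spiking_relu_rate`, the unit threshold of 1.0, and reset-by-subtraction are illustrative assumptions, not the paper's full training algorithm.

```python
def spiking_relu_rate(x, n_steps=10000):
    """Simulate an integrate-and-fire unit driven by constant input x.

    Each time step, x is accumulated into a potential; whenever the
    potential crosses the threshold (1.0), a spike is emitted and the
    threshold is subtracted (reset-by-subtraction). The time-averaged
    spike count approaches max(0, x) as n_steps grows, matching the
    ReLU activation in the long-run limit.
    """
    potential = 0.0
    spikes = 0
    for _ in range(n_steps):
        potential += x
        while potential >= 1.0:  # fire, then reset by subtraction
            potential -= 1.0
            spikes += 1
    return spikes / n_steps

print(spiking_relu_rate(0.37))   # approaches 0.37 for large n_steps
print(spiking_relu_rate(-0.5))   # 0.0: threshold is never crossed
```

A negative input drives the potential downward, so the unit never fires, mirroring the zero branch of the ReLU; this is why running the spiking network for many steps recovers the behavior of a conventional rectified-linear layer.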
