Event-driven random backpropagation: Enabling neuromorphic deep learning machines

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning that are compatible with the spatial and temporal constraints of the brain. Gradient-descent backpropagation is a powerful algorithm that is ubiquitous in deep learning, but it relies on the immediate availability of network-wide information stored in high-precision memory. However, recent work shows that exact backpropagated error gradients are not essential for learning deep representations. Here, we demonstrate an event-driven random backpropagation (eRBP) rule that uses error-modulated synaptic plasticity to learn deep representations in neuromorphic computing hardware. The rule is well suited to hardware implementation, requiring only a two-compartment leaky integrate-and-fire neuron and a membrane-voltage-modulated, spike-driven plasticity rule. Our results show that eRBP rapidly learns deep representations without backpropagated gradients, achieving classification accuracies nearly identical to those of artificial neural network simulations on GPUs, while remaining robust to quantization of neural and synaptic state variables during learning.

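For readers who want the gist of the learning rule, the following is a minimal, illustrative sketch of an eRBP-style update in Python/NumPy. It is not the authors' implementation: the network size, the Poisson input encoding, the boxcar voltage gate, and the collapse of the two-compartment neuron into a single leaky integrate-and-fire compartment are simplifying assumptions made here for brevity. What it preserves is the core mechanism described in the abstract: the output error is projected back to hidden units through a fixed random matrix instead of the transposed forward weights, and each synaptic update is driven by presynaptic spikes and gated by the postsynaptic membrane voltage.

# Minimal, illustrative sketch of an eRBP-style update (not the authors' code).
# Assumed simplifications: single hidden layer, Poisson-encoded inputs, a boxcar
# function of the membrane voltage as the plasticity gate, and a single-compartment
# LIF neuron in place of the paper's two-compartment model.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 784, 100, 10
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # hidden -> output weights
G  = rng.normal(0.0, 1.0, (n_hid, n_out))  # fixed random feedback weights

dt, tau, v_th, lr = 1e-3, 20e-3, 1.0, 1e-3

def lif_step(v, i_syn):
    """One Euler step of a leaky integrate-and-fire membrane; returns (v, spikes)."""
    v = v + (dt / tau) * (-v + i_syn)
    spikes = (v >= v_th).astype(float)
    v = np.where(spikes > 0, 0.0, v)       # reset to rest on spike
    return v, spikes

def boxcar(v, lo=-0.5, hi=1.5):
    """Plasticity is enabled only while the membrane voltage sits inside a window."""
    return ((v > lo) & (v < hi)).astype(float)

def erbp_step(x_rate, target, v_h, v_o, T=100):
    """Present one input for T timesteps and apply eRBP-style updates online."""
    global W1, W2
    for _ in range(T):
        s_in = (rng.random(n_in) < x_rate * dt).astype(float)  # Poisson input spikes
        v_h, s_h = lif_step(v_h, W1 @ s_in)
        v_o, s_o = lif_step(v_o, W2 @ s_h)
        err = s_o - target                  # output error (spikes vs. one-hot label)
        # The error is fed back through the fixed random matrix G rather than W2.T,
        # and every update is driven by presynaptic spikes and gated by the
        # postsynaptic membrane voltage -- no backpropagated gradients are used.
        W2 -= lr * np.outer(err * boxcar(v_o), s_h)
        W1 -= lr * np.outer((G @ err) * boxcar(v_h), s_in)
    return v_h, v_o

# Example usage with a hypothetical rate-coded input and a one-hot target.
v_h, v_o = np.zeros(n_hid), np.zeros(n_out)
x_rate = rng.random(n_in) * 100.0           # input firing rates in Hz
target = np.eye(n_out)[3]
v_h, v_o = erbp_step(x_rate, target, v_h, v_o)

In a full implementation the error itself would be represented by spiking error neurons and the random projection would drive a dedicated dendritic compartment, but the structure of the update is the same: error times a fixed random feedback projection, gated by the membrane voltage and triggered by spike events.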