SpikeProp: backpropagation for networks of spiking neurons
For a network of spiking neurons with reasonable postsynaptic potentials, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. Using this learning algorithm, we demonstrate that networks of spiking neurons with biologically plausible time constants can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks. When comparing the (implicit) number of neurons required for the respective encodings, it is empirically demonstrated that temporal coding potentially requires significantly fewer neurons.
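The core idea can be sketched in code. The abstract's "discontinuities introduced by thresholding" are sidestepped by linearising the membrane potential around the spike time, giving a well-defined gradient of the spike time with respect to each weight. The following is a minimal sketch, not the paper's implementation: the alpha-shaped postsynaptic potential, the threshold value, and the grid-search spike-time solver are all illustrative assumptions.

```python
import numpy as np

TAU = 7.0  # PSP time constant (ms); a biologically plausible value, chosen for illustration

def eps(s):
    """Assumed alpha-shaped postsynaptic potential, normalised to peak 1 at s = TAU."""
    return np.where(s > 0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def eps_prime(s):
    """Time derivative of eps, needed for the linearisation around the spike time."""
    return np.where(s > 0, (1.0 / TAU) * np.exp(1.0 - s / TAU) * (1.0 - s / TAU), 0.0)

def spike_time(weights, pre_times, theta=1.0, t_max=30.0, dt=1e-3):
    """First time the membrane potential u(t) = sum_i w_i * eps(t - t_i) crosses theta.
    A simple grid search stands in for whatever solver an actual simulator would use."""
    t = np.arange(0.0, t_max, dt)
    u = sum(w * eps(t - ti) for w, ti in zip(weights, pre_times))
    return t[np.argmax(u >= theta)]  # index of the first threshold crossing

def spikeprop_grad(weights, pre_times, t_s):
    """SpikeProp-style gradient dt_s/dw_i = -eps(t_s - t_i) / u'(t_s):
    the hard threshold is replaced by a local linearisation of u(t) at t_s."""
    du = sum(w * eps_prime(t_s - ti) for w, ti in zip(weights, pre_times))
    return np.array([-eps(t_s - ti) / du for ti in pre_times])

# Two presynaptic spikes at 0 ms and 1 ms, unit weights (hypothetical toy values).
w, pre = [1.0, 1.0], [0.0, 1.0]
t_s = spike_time(w, pre)
grads = spikeprop_grad(w, pre, t_s)  # negative: stronger weights fire the neuron earlier
```

Note the sign of the gradient: increasing a weight raises the membrane potential, so the threshold is reached sooner and the spike time decreases, which is exactly what the linearised derivative captures.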