On the Complexity of Learning for Spiking Neurons with Temporal Coding

Abstract Spiking neurons are models for the computational units in biological neural systems, where information is considered to be encoded mainly in the temporal patterns of their activity. In a network of spiking neurons, a new set of parameters becomes relevant that has no counterpart in traditional neural network models: the time a pulse needs to travel through a connection between two neurons (known as the delay of a connection). It is known that these delays are tuned in biological neural systems through a variety of mechanisms. In this article we consider arguably the simplest model for a spiking neuron, which can also easily be implemented in pulsed VLSI. We investigate the Vapnik–Chervonenkis (VC) dimension of networks of spiking neurons, where the delays are viewed as programmable parameters, and we prove tight bounds for this VC dimension. Thus, we obtain quantitative estimates for the diversity of functions that a network with fixed architecture can compute with different settings of its delays. In particular, it turns out that a network of spiking neurons with k adjustable delays can compute a much richer class of functions than a threshold circuit with k adjustable weights. The results also yield bounds on the number of training examples that an algorithm needs for tuning the delays of a network of spiking neurons. Results about the computational complexity of such algorithms are also given.
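To make the role of programmable delays concrete, the following is a minimal illustrative sketch (not the paper's formal model; all names and the coincidence-window mechanism are assumptions for illustration). A neuron receives input spikes at times t_i, each shifted by an adjustable delay d_i, and fires once a threshold number of delayed spikes arrive close together in time. Changing only the delays changes whether, and when, the neuron fires on the same input:

```python
def firing_time(spike_times, delays, threshold, window=1.0):
    """Hypothetical coincidence-detector neuron: return the earliest time at
    which `threshold` delayed input spikes have all arrived within the last
    `window` time units, or None if the neuron never fires."""
    # Each input spike at time t_i arrives at the soma at t_i + d_i.
    arrivals = sorted(t + d for t, d in zip(spike_times, delays))
    for i in range(threshold - 1, len(arrivals)):
        # Check whether the last `threshold` arrivals fit in one window.
        if arrivals[i] - arrivals[i - threshold + 1] <= window:
            return arrivals[i]  # fires at the threshold-th coincident arrival
    return None

# Identical input spikes; only the delay settings differ.
synchronizing = firing_time([0.0, 0.0, 0.0], [0.1, 0.2, 0.3], threshold=3)
desynchronizing = firing_time([0.0, 0.0, 0.0], [0.0, 2.0, 4.0], threshold=3)
print(synchronizing)      # delays align the arrivals, so the neuron fires
print(desynchronizing)    # delays spread the arrivals, so it stays silent
```

The point of the sketch is the one emphasized in the abstract: the delays alone act as trainable parameters, since they determine the temporal alignment of incoming pulses and hence which input patterns elicit an output spike.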

[1]  Henry C. Tuckwell,et al.  Introduction to theoretical neurobiology , 1988 .

[2]  Wolfgang Maass,et al.  Networks of Spiking Neurons: The Third Generation of Neural Network Models , 1996, Electron. Colloquium Comput. Complex..

[3]  Thomas M. Cover,et al.  Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition , 1965, IEEE Trans. Electron. Comput..

[4]  Leslie G. Valiant,et al.  A theory of the learnable , 1984, STOC '84.

[5]  David Haussler,et al.  Learnability and the Vapnik-Chervonenkis dimension , 1989, JACM.

[6]  Hans Ulrich Simon,et al.  Robust Trainability of Single Neurons , 1995, J. Comput. Syst. Sci..

[7]  Alan F. Murray,et al.  Analogue Neural Vlsi: A Pulse Stream Approach , 1994 .

[8]  Saburo Muroga,et al.  Threshold logic and its applications , 1971 .

[9]  David S. Johnson,et al.  Computers and Intractability: A Guide to the Theory of NP-Completeness , 1978 .

[10]  Eduardo D. Sontag,et al.  Finiteness results for sigmoidal “neural” networks , 1993, STOC.

[11]  W. Gerstner,et al.  Time structure of the activity in neural network models. , 1995, Physical review. E, Statistical physics, plasmas, fluids, and related interdisciplinary topics.

[12]  I Segev,et al.  Signal delay and input synchronization in passive dendritic structures. , 1993, Journal of neurophysiology.

[13]  Wolfgang Maass,et al.  Fast Sigmoidal Networks via Spiking Neurons , 1997, Neural Computation.

[14]  David Haussler,et al.  Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications , 1992, Inf. Comput..

[15]  Wulfram Gerstner,et al.  A neuronal learning rule for sub-millisecond temporal coding , 1996, Nature.

[16]  Paul W. Goldberg,et al.  Bounding the Vapnik-Chervonenkis Dimension of Concept Classes Parameterized by Real Numbers , 1993, COLT '93.

[17]  Saburo Muroga,et al.  Lower Bound of the Number of Threshold Functions , 1966, IEEE Trans. Electron. Comput..

[18]  Christopher J. Bishop,et al.  Pulsed Neural Networks , 1998 .

[19]  Ludwig Schl fli Theorie der vielfachen Kontinuit??t , 1950 .

[20]  Hans Ulrich Simon,et al.  Robust Trainability of Single Neurons , 1995, J. Comput. Syst. Sci..

[21]  Eduardo D. Sontag,et al.  Neural Networks with Quadratic VC Dimension , 1995, J. Comput. Syst. Sci..

[22]  L. Schläfli Theorie der vielfachen Kontinuität , 1901 .

[23]  Wofgang Maas,et al.  Networks of spiking neurons: the third generation of neural network models , 1997 .

[24]  Barak A. Pearlmutter,et al.  VC Dimension of an Integrate-and-Fire Neuron Model , 1996, Neural Computation.

[25]  Leslie G. Valiant,et al.  Learning Boolean formulas , 1994, JACM.

[26]  Ronald L. Rivest,et al.  Training a 3-node neural network is NP-complete , 1988, COLT '88.