A learning rule for very simple universal approximators consisting of a single layer of perceptrons

One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that, in contrast to the familiar "multi-layer perceptron" model, the parallel perceptron considered here has only binary values as outputs of the gates on the hidden layer. For a long time it was thought that no competitive learning algorithm exists for these extremely simple neural networks, which are also known as committee machines. It is commonly assumed that one has to replace the hard-threshold gates on the hidden layer by sigmoidal gates (or RBF gates), and that one has to tune the weights on at least two successive layers, in order to achieve satisfactory learning results for any class of neural networks that yields universal approximators. We show that this assumption is not true, by exhibiting a simple learning algorithm for parallel perceptrons: the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule has to tune only a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes this new learning rule from other learning rules for parallel perceptrons, such as MADALINE. These features make the p-delta rule attractive as a biologically more realistic alternative to backprop in biological neural circuits, but also for implementations in special-purpose hardware. We show that the p-delta rule also implements gradient descent with regard to a suitable error measure, although it does not require the computation of derivatives. Furthermore, experiments on common real-world benchmark datasets show that its performance is competitive with that of other learning approaches from neural networks and machine learning. It has recently been shown [Anthony, M. (2007). On the generalization error of fixed combinations of classifiers. Journal of Computer and System Sciences, 73(5), 725-734; Anthony, M. (2004). On learning a function of perceptrons. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, Vol. 2 (pp. 967-972)] that one can also prove quite satisfactory bounds on the generalization error of this new learning rule.
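To make the architecture and the style of update concrete, the sketch below implements a parallel perceptron for binary classification together with a simplified variant of the p-delta update. The class name, the hyperparameters (n_units, eta, gamma, mu) and the details of the margin-stabilization step are illustrative assumptions, not the exact rule from the paper; the sketch only conveys that a single layer of hard-threshold perceptrons is trained, and that each perceptron needs only its own activation and the pooled binary output.

```python
# Minimal sketch of a parallel perceptron with a simplified p-delta update.
# n_units, eta (learning rate), gamma (margin), mu (stabilization strength)
# are illustrative hyperparameters, not values prescribed by the paper.
import numpy as np


class ParallelPerceptron:
    def __init__(self, n_inputs, n_units=11, eta=0.01, gamma=0.1, mu=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per hard-threshold perceptron; inputs are assumed
        # to include a constant bias component.
        self.W = rng.normal(size=(n_units, n_inputs))
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
        self.eta, self.gamma, self.mu = eta, gamma, mu

    def predict(self, x):
        """Binary output in {-1, +1}: majority vote of the perceptrons."""
        votes = np.where(self.W @ x >= 0.0, 1, -1)
        return 1 if votes.sum() >= 0 else -1

    def update(self, x, target):
        """One simplified p-delta step for a single example (target in {-1, +1})."""
        acts = self.W @ x                      # activations w_i . x
        votes = np.where(acts >= 0.0, 1, -1)   # binary outputs of the perceptrons
        output = 1 if votes.sum() >= 0 else -1
        for i in range(self.W.shape[0]):
            if output != target and votes[i] != target:
                # Pooled output is wrong: nudge the disagreeing perceptrons
                # toward the target, as in the classical delta rule.
                self.W[i] += self.eta * target * x
            elif votes[i] == target and abs(acts[i]) < self.gamma:
                # Margin stabilization: push correct but marginal activations
                # away from the threshold so that the vote becomes robust.
                self.W[i] += self.eta * self.mu * votes[i] * x
            # Keep each weight vector normalized (bounded weight norms are assumed).
            self.W[i] /= np.linalg.norm(self.W[i])
```

Note that the update for perceptron i depends only on its own activation and on the pooled binary output, which reflects the low-communication property emphasized above: no high-precision analog values have to be propagated between layers.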

[1] Bernard Widrow, et al. 30 years of adaptive neural networks: perceptron, Madaline, and backpropagation, 1990, Proc. IEEE.

[2] M. Anthony. On learning a function of perceptrons, 2004, 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541).

[3] Simon Haykin, et al. Neural Networks: A Comprehensive Foundation, 1998.

[4] H. D. Block. The perceptron: a model for brain functioning. I, 1962.

[5] Gina Turrigiano, et al. A Competitive Game of Synaptic Tag, 2004, Neuron.

[6] Marwan A. Jabri, et al. Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks, 1992, IEEE Trans. Neural Networks.

[7] D. Signorini, et al. Neural networks, 1995, The Lancet.

[8] Wolfgang Maass, et al. Networks of Spiking Neurons: The Third Generation of Neural Network Models, 1996, Electron. Colloquium Comput. Complex.

[9] Xiaohui Xie, et al. Learning in neural networks by reinforcement of irregular spiking, 2004, Physical Review E: Statistical, Nonlinear, and Soft Matter Physics.

[10] Leon O. Chua, et al. Fading memory and the problem of approximating nonlinear operators with Volterra series, 1985.

[11] Emile Fiesler, et al. Neural Network Adaptations to Hardware Implementations, 1997.

[12] Eduardo D. Sontag, et al. Neural Systems as Nonlinear Filters, 2000, Neural Computation.

[13] Wolfgang Maass, et al. Networks of spiking neurons: the third generation of neural network models, 1997.

[14] N. Caticha, et al. On-line learning in the committee machine, 1995.

[15] Peter Auer, et al. Reducing Communication for Distributed Learning in Neural Networks, 2002, ICANN.

[16] Isabelle Guyon, et al. Automatic Capacity Tuning of Very Large VC-Dimension Classifiers, 1992, NIPS.

[17] Catherine Blake, et al. UCI Repository of machine learning databases, 1998.

[18] E. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling, 2007, BMC Neuroscience.

[19] Christopher M. Bishop, et al. Pulsed Neural Networks, 1998.

[20] Russell Beale, et al. Handbook of Neural Computation, 1996.

[21] Nello Cristianini, et al. An Introduction to Support Vector Machines, 2000.

[22] Wolfgang Maass, et al. Neural Computation with Winner-Take-All as the Only Nonlinear Operation, 1999, NIPS.

[23] Wolfgang Maass, et al. On the Computational Power of Winner-Take-All, 2000, Neural Computation.

[24] A. A. Mullin, et al. Principles of Neurodynamics, 1962.

[25] Martin Anthony. On the generalization error of fixed combinations of classifiers, 2007, J. Comput. Syst. Sci.

[26] Henry Markram, et al. Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations, 2002, Neural Computation.

[27] Ila R. Fiete, et al. Gradient learning in spiking neural networks by dynamic perturbation of conductances, 2006, Physical Review Letters.

[28] Yoav Freund, et al. Large Margin Classification Using the Perceptron Algorithm, 1998, COLT.

[29] Hendrik B. Geyer, et al. Journal of Physics A: Mathematical and General, Special Issue, Preface, 2006.

[30] R. Douglas, et al. Neuronal circuits of the neocortex, 2004, Annual Review of Neuroscience.

[31] L. Abbott, et al. Synaptic plasticity: taming the beast, 2000, Nature Neuroscience.

[32] Wulfram Gerstner, et al. Spiking neurons, 1999.

[33] Nils J. Nilsson, et al. The Mathematical Foundations of Learning Machines, 1990.