On-line backpropagation in two-layered neural networks

We present an exact analysis of learning a rule by on-line gradient descent in a two-layered neural network with adjustable hidden-to-output weights (backpropagation of error). The results are compared with those obtained for networks of the same architecture but with fixed weights in the second layer.
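To make the training procedure concrete, the following is a minimal sketch of on-line gradient descent (backpropagation of error) for a two-layered student network with adjustable hidden-to-output weights, learning a rule defined by a teacher network of the same architecture. The tanh activations, Gaussian inputs, learning-rate scalings, and all names and parameters (N, K, eta, and so on) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes and parameters (illustrative only).
N, K = 100, 3          # input dimension, number of hidden units
eta = 0.5              # learning rate
steps = 200 * N        # number of on-line examples (one fresh example per step)

g = np.tanh
def g_prime(x):
    return 1.0 - np.tanh(x) ** 2

# Teacher network: fixed weights defining the rule to be learned.
B = rng.standard_normal((K, N)) / np.sqrt(N)   # teacher input-to-hidden weights
v = np.ones(K)                                  # teacher hidden-to-output weights

# Student network: both layers adjustable (backpropagation of error).
J = rng.standard_normal((K, N)) / np.sqrt(N)   # student input-to-hidden weights
w = rng.standard_normal(K)                      # student hidden-to-output weights

for t in range(steps):
    xi = rng.standard_normal(N)                 # new random input at every step (on-line learning)
    tau = v @ g(B @ xi)                         # teacher output: the rule to be learned

    h = J @ xi                                  # student hidden fields
    sigma = w @ g(h)                            # student output
    delta = sigma - tau                         # output error on this example

    # Single-example gradient step on the quadratic error 0.5 * delta**2.
    # Setting the second-layer step to zero would freeze w and reproduce
    # the fixed-second-layer comparison case mentioned in the abstract.
    w -= (eta / N) * delta * g(h)                           # hidden-to-output update
    J -= (eta / N) * delta * np.outer(w * g_prime(h), xi)   # input-to-hidden update

# Estimate the generalization error on fresh random inputs.
X = rng.standard_normal((1000, N))
eg = 0.5 * np.mean((g(X @ J.T) @ w - g(X @ B.T) @ v) ** 2)
print("estimated generalization error:", eg)
```

The 1/N scaling of both learning rates follows the usual convention in statistical-mechanics treatments of on-line learning, so that the order parameters evolve smoothly on the timescale of steps/N; it is a design choice of this sketch rather than a statement about the paper's exact formulation.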
