PPNN: a faster learning and better generalizing neural net

It is pointed out that the planar topology of the conventional backpropagation neural network (BPNN) limits how far its slow convergence, local minima, and related problems can be remedied. The parallel probabilistic neural network (PPNN), built on a novel neural network topology called stereotopology, is proposed to overcome these problems. The learning and generalization abilities of BPNN and PPNN are compared on several problems. Simulation results show that PPNN learns various kinds of problems much faster than BPNN and also generalizes better. It is shown that PPNN's faster, universal learnability is due to the parallel characteristic of its stereotopology, while its better generalization ability comes from the probabilistic characteristic of its memory retrieval rule.
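For context, the sketch below shows the standard backpropagation baseline (BPNN) that the abstract compares PPNN against: a single hidden layer of sigmoid units trained by gradient descent. The network size, learning rate, and the XOR task are illustrative assumptions, not details from the paper; the PPNN algorithm itself is not specified in this abstract, so no sketch of it is attempted.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic small task on which plain backpropagation converges slowly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units: the planar, layer-to-layer topology of a BPNN.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

lr = 0.5  # assumed learning rate, chosen only for this toy example
for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error through the sigmoid units.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # Gradient-descent weight updates (the slow-convergence step at issue).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # approaches [0, 1, 1, 0] only after many epochs
```

The many epochs such a network needs, and its sensitivity to initialization (it can settle into a local minimum), are the BPNN weaknesses the abstract attributes to its planar topology.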
