Dynamically pruning output weights in an expanding multilayer perceptron neural network

The network size for a multilayer perceptron neural network is often chosen arbitrarily for different applications, and the optimum size of the network is determined by a long process of trial and error. This paper presents a backpropagation algorithm for a multilayer perceptron (MLP) neural network that dynamically determines the optimum number of hidden nodes and applies a new pruning technique to the output weights. A 29% reduction in the total number of output weights was observed for a handwritten character recognition problem using the new pruning algorithm.
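The abstract combines two ideas: growing the hidden layer during training and pruning output weights. The following is a minimal sketch of how such a grow-and-prune loop might look, not the paper's actual algorithm: the plateau-based growth trigger, the magnitude threshold for pruning, all hyperparameters, and the toy XOR task (standing in for the paper's handwritten character recognition problem) are illustrative assumptions.

```python
# Illustrative sketch of an expanding MLP with output-weight pruning.
# Growth trigger, pruning criterion, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowPruneMLP:
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))
        self.mask = np.ones_like(self.W2)  # 0 marks a pruned output weight

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x)
        return sigmoid((self.W2 * self.mask) @ self.h)

    def backprop(self, x, t, lr=0.5):
        y = self.forward(x)
        delta_out = (y - t) * y * (1 - y)                 # output-layer error
        delta_hid = ((self.W2 * self.mask).T @ delta_out) * self.h * (1 - self.h)
        self.W2 -= lr * np.outer(delta_out, self.h) * self.mask
        self.W1 -= lr * np.outer(delta_hid, x)
        return 0.5 * np.sum((y - t) ** 2)

    def add_hidden_node(self):
        # Expand both weight matrices by one hidden unit.
        self.W1 = np.vstack([self.W1, rng.normal(0, 0.5, (1, self.W1.shape[1]))])
        self.W2 = np.hstack([self.W2, rng.normal(0, 0.5, (self.W2.shape[0], 1))])
        self.mask = np.hstack([self.mask, np.ones((self.W2.shape[0], 1))])

    def prune_output_weights(self, threshold=0.05):
        # Permanently zero small-magnitude output weights (assumed criterion).
        self.mask[np.abs(self.W2) < threshold] = 0.0

# Toy usage: learn XOR, growing from a single hidden node, then prune.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

net = GrowPruneMLP(n_in=2, n_hidden=1, n_out=1)
prev_err = np.inf
for epoch in range(5000):
    err = sum(net.backprop(x, t) for x, t in zip(X, T))
    if epoch % 200 == 199:
        # Assumed growth rule: add a node when the error has plateaued
        # but the network has not yet converged.
        if prev_err - err < 1e-3 and err > 0.05:
            net.add_hidden_node()
        prev_err = err

net.prune_output_weights(threshold=0.05)
kept = int(net.mask.sum())
print(f"hidden nodes: {net.W1.shape[0]}, output weights kept: {kept}/{net.mask.size}")
```

The mask-based pruning keeps the weight matrices dense while excluding pruned connections from both the forward pass and the gradient update; a production implementation would more likely compact the matrices instead.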
