Improved Multilayer Perceptron Design by Weighted Learning

This paper presents new results on optimizing the backpropagation algorithm by applying a weighting operation to the updates of an artificial neural network's weights during the learning phase. This modified backpropagation technique, recently proposed by the author, is applied to the training of a multilayer perceptron (MLP) in order to make more efficient use of the given training patterns. The idea is to modify the mean square error (MSE) objective function so as to improve training efficiency. We show that applying the weighting function substantially accelerates training convergence while maintaining the neural network's (NN) performance.
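As a rough illustration of the idea, the sketch below trains a small one-hidden-layer MLP by backpropagation on a weighted MSE objective, where each training pattern's squared error is scaled by a per-pattern importance weight before it enters the weight updates. The weighting function used here (uniform weights, which recover plain MSE) and all variable names are placeholders for illustration, not the specific scheme proposed in the paper.

```python
import numpy as np

def weighted_mse(pred, target, w):
    """Weighted MSE: each pattern's squared error is scaled by an
    importance weight before averaging. Uniform w gives plain MSE."""
    return float(np.mean(w * (pred - target) ** 2))

rng = np.random.default_rng(0)

# Toy regression patterns (placeholder data, not from the paper).
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = np.sin(np.pi * X)

# Hypothetical per-pattern importance weights; the paper's actual
# weighting function would go here.
w = np.ones((64, 1))

# One-hidden-layer MLP: tanh hidden units, linear output.
W1 = rng.normal(0.0, 0.5, size=(1, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, size=(8, 1))
b2 = np.zeros(1)

loss_initial = weighted_mse(np.tanh(X @ W1 + b1) @ W2 + b2, y, w)

lr = 0.05
for _ in range(500):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    # Backpropagate the *weighted* error: the importance weight w
    # multiplies the residual exactly where plain MSE would use it.
    delta2 = 2.0 * w * (pred - y) / len(X)
    dW2 = h.T @ delta2
    db2 = delta2.sum(axis=0)
    delta1 = (delta2 @ W2.T) * (1.0 - h ** 2)  # tanh' = 1 - tanh^2
    dW1 = X.T @ delta1
    db1 = delta1.sum(axis=0)
    # Gradient-descent weight actualization.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

loss_final = weighted_mse(np.tanh(X @ W1 + b1) @ W2 + b2, y, w)
```

With uniform weights this reduces to ordinary MSE backpropagation; a non-uniform weighting emphasizes selected patterns in every update, which is the mechanism the paper exploits to speed up convergence.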
