A weight evolution algorithm for finding the global minimum of error function in neural networks

This paper introduces a new weight evolution algorithm for finding the global minimum of the error function in a multilayered neural network. During the learning phase of backpropagation, the network weights are adjusted deliberately to improve system performance. By examining the outputs of the nodes, some of the network weights can be adjusted deterministically so as to achieve an overall reduction in system error. The idea is to work backward from the error components and the system outputs to deduce a deterministic perturbation of particular network weights for optimization purposes. With the new algorithm, it is found that weight evolution between the hidden and output layers accelerates convergence, whereas weight evolution between the input and hidden layers helps to overcome the local minima problem.

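The idea described in the abstract can be illustrated with a minimal sketch. The NumPy code below is hypothetical and does not reproduce the paper's exact update rules; the network size, learning rate, stagnation threshold, and perturbation scale are all assumptions. It trains a small single-hidden-layer network with standard backpropagation and, when the error stagnates, applies a deterministic perturbation to the input-to-hidden weights derived from the backpropagated error components and the node outputs, rather than a random restart.

```python
# Hypothetical sketch: backpropagation plus a simplified "weight evolution"
# step on the input->hidden weights when learning appears to stall.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 3))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden -> output weights
lr, prev_err = 0.5, np.inf

for epoch in range(5000):
    # --- standard backpropagation pass ---
    H = sigmoid(X @ W1)                    # hidden node outputs
    Y = sigmoid(H @ W2)                    # system outputs
    err = 0.5 * np.sum((T - Y) ** 2)       # total squared error
    dY = (Y - T) * Y * (1 - Y)             # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)         # hidden-layer delta
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH

    # --- weight evolution step when the error has stagnated ---
    if abs(prev_err - err) < 1e-6 and err > 1e-3:
        # Work backward from the error components to the hidden nodes and
        # apply a deterministic perturbation to the input->hidden weights.
        # This simplified correction (a scaled, normalized backpropagated
        # error term) only illustrates the flavour of the approach; the
        # paper's actual rule is not reproduced here.
        eH = (T - Y) @ W2.T * H * (1 - H)  # error components seen by hidden nodes
        W1 += 0.5 * X.T @ eH / (np.sum(X ** 2) + 1e-8)
    prev_err = err

print("final error:", err, "outputs:", Y.ravel().round(3))
```

In this sketch the perturbation is triggered only when successive epochs change the error by less than a threshold, which stands in for the paper's criterion for deciding when weight evolution should intervene.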