Adaptation of parameters of BP algorithm using learning automata

The backpropagation (BP) algorithm is a systematic method for training multilayer neural networks. Despite its many successful applications, backpropagation has several drawbacks: for complex problems it may require a long training time, or it may fail to train at all. Long training times often result from non-optimal parameter settings, and it is not easy to choose appropriate parameter values for a particular problem. In this paper, by interconnecting fixed-structure learning automata (FSLA) with feedforward neural networks, we apply a learning-automata scheme that adjusts these parameters based on observations of the random response of the network. The main motivation for using learning automata as the adaptation mechanism is their capability for global optimization on multimodal error surfaces. The feasibility of the proposed method is demonstrated through simulations on three learning problems: exclusive-or, encoding, and digit recognition. The simulation results show that adapting these parameters with this method not only increases the convergence rate of learning but also increases the likelihood of escaping from local minima.
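The idea above can be sketched in code. The following is a minimal, illustrative example (not the paper's actual algorithm): a two-action fixed-structure learning automaton of the Tsetlin type chooses whether to increase or decrease the BP learning rate, and the network acts as the random environment, rewarding the automaton when the training error drops. All constants, the network architecture, and the reward rule here are assumptions made for illustration.

```python
# Hedged sketch: adapting the BP learning rate with a fixed-structure
# (Tsetlin-style) learning automaton. Illustrative only; the paper's
# exact automaton, reward scheme, and parameter ranges are not reproduced.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TsetlinAutomaton:
    """Two actions (0 = increase lr, 1 = decrease lr), `depth` memory
    states per action. A reward deepens commitment to the current action;
    a penalty moves the state toward the boundary and may switch actions."""
    def __init__(self, depth=2):
        self.depth = depth
        self.state = 1  # states 1..depth -> action 0; depth+1..2*depth -> action 1

    def action(self):
        return 0 if self.state <= self.depth else 1

    def update(self, reward):
        if reward:  # move deeper into the current action's memory
            if self.action() == 0:
                self.state = max(1, self.state - 1)
            else:
                self.state = min(2 * self.depth, self.state + 1)
        else:       # move toward the boundary, possibly switching action
            self.state += 1 if self.action() == 0 else -1

class XorNet:
    """Tiny 2-2-1 sigmoid network trained by plain backpropagation."""
    def __init__(self):
        self.w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
        self.w2 = [random.uniform(-1, 1) for _ in range(3)]

    def forward(self, x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.w1]
        o = sigmoid(self.w2[0] * h[0] + self.w2[1] * h[1] + self.w2[2])
        return h, o

    def error(self, data):
        return sum((t - self.forward(x)[1]) ** 2 for x, t in data)

    def train_epoch(self, data, lr):
        for x, t in data:
            h, o = self.forward(x)
            delta_o = (o - t) * o * (1 - o)
            delta_h = [delta_o * self.w2[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                self.w2[j] -= lr * delta_o * h[j]
            self.w2[2] -= lr * delta_o
            for j in range(2):
                for k in range(2):
                    self.w1[j][k] -= lr * delta_h[j] * x[k]
                self.w1[j][2] -= lr * delta_h[j]

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net, automaton, lr = XorNet(), TsetlinAutomaton(), 0.5
init_err = prev = net.error(data)
for epoch in range(2000):
    # The automaton picks a parameter adjustment; the network is the
    # "random environment": a drop in error is a reward, else a penalty.
    lr = min(2.0, lr * 1.05) if automaton.action() == 0 else max(0.01, lr * 0.95)
    net.train_epoch(data, lr)
    err = net.error(data)
    automaton.update(reward=(err < prev))
    prev = err
```

Because the automaton's feedback comes only from observed error changes, the same loop could adapt other BP parameters (e.g., momentum) by giving each its own automaton, which is the kind of coupling the abstract describes.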
