Accelerating the standard backpropagation method using a genetic approach
Abstract Backpropagation is a well-known method for training feedforward neural networks, and it usually requires a large number of iterations to train a network. In this paper it is shown that this number can be drastically reduced if, after each iteration step, the weights are changed by a simple mutation as is done in genetic algorithms. In the examples described, the computing time was reduced to 5.5% of the backpropagation time on average, and the average number of iterations dropped from 120,000 (backpropagation) to 1,400.
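The sketch below illustrates the idea described in the abstract: a standard backpropagation update followed, in the same iteration, by a GA-style mutation of the weights. The network size, learning rate, mutation strength, and the choice to keep a mutation only when it does not increase the error are all assumptions for illustration; the paper may define the mutation step differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 network trained on XOR; all values are illustrative assumptions.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)      # hidden layer activations
    y = sigmoid(h @ W2 + b2)      # network output
    return h, y

def error(W1, b1, W2, b2):
    _, y = forward(W1, b1, W2, b2)
    return 0.5 * np.sum((y - T) ** 2)   # sum-squared error

eta, sigma = 0.5, 0.05   # learning rate and mutation strength (assumed values)
for it in range(2000):
    # --- standard backpropagation step ---
    h, y = forward(W1, b1, W2, b2)
    delta2 = (y - T) * y * (1 - y)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ delta2;  b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * X.T @ delta1;  b1 -= eta * delta1.sum(axis=0)

    # --- GA-style mutation after the iteration (assumption: small Gaussian
    # perturbation of the weights, kept only if the error does not increase) ---
    cand_W1 = W1 + rng.normal(scale=sigma, size=W1.shape)
    cand_W2 = W2 + rng.normal(scale=sigma, size=W2.shape)
    if error(cand_W1, b1, cand_W2, b2) <= error(W1, b1, W2, b2):
        W1, W2 = cand_W1, cand_W2

print("final sum-squared error:", error(W1, b1, W2, b2))
```

The mutation adds only one extra forward pass per iteration, so if it cuts the number of iterations as the paper reports, the overall computing time drops accordingly.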