Efficient Genetic Algorithms for Training Layered Feedforward Neural Networks

Abstract

Recent research on the use of genetic algorithms for training neural networks has led to controversy over whether this approach is more efficient than the more traditional backpropagation algorithm. Each approach developed from a different background, and each has its own advantages and disadvantages. In this paper, we propose three genetic algorithms, developed specifically for training layered feedforward neural networks, that use an adaptive technique which exploits the network architecture. We also describe several simulation experiments conducted to assess these genetic algorithms and to compare them with backpropagation. The experimental results show that our algorithms are more efficient in terms of convergence speed. However, we recognize that a hybrid method that incorporates the advantages of both backpropagation and genetic algorithms might be more practical.
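To make the general setting concrete, the sketch below evolves the weights of a small feedforward network with a plain genetic algorithm (tournament selection, uniform crossover, Gaussian mutation) on a toy XOR task. It is a generic baseline for illustration only, not the three adaptive algorithms proposed in this paper; the network size, fitness function, and all GA parameters are assumptions.

```python
# A minimal, generic sketch of evolving feedforward-network weights with a GA.
# Illustrative only; not the adaptive algorithms described in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR with a 2-2-1 feedforward network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID, N_OUT = 2, 2, 1
GENOME_LEN = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # weights + biases

def decode(genome):
    """Unpack a flat genome into the network's weight matrices and biases."""
    i = 0
    W1 = genome[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = genome[i:i + N_HID]; i += N_HID
    W2 = genome[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = genome[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(genome, inputs):
    """One feedforward pass with sigmoid units."""
    W1, b1, W2, b2 = decode(genome)
    h = 1.0 / (1.0 + np.exp(-(inputs @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return out.ravel()

def fitness(genome):
    """Higher is better: negative sum-of-squared-errors over the training set."""
    return -np.sum((forward(genome, X) - y) ** 2)

# Standard GA loop: elitism, tournament selection, uniform crossover,
# Gaussian mutation. Population size, generations, and mutation scale
# are arbitrary choices for this toy example.
POP, GENS, MUT_STD = 50, 300, 0.5
population = rng.normal(0.0, 1.0, size=(POP, GENOME_LEN))

for gen in range(GENS):
    scores = np.array([fitness(g) for g in population])
    next_pop = [population[np.argmax(scores)].copy()]  # elitism: keep the best
    while len(next_pop) < POP:
        # Tournament selection of two parents (tournament size 2).
        a, b = rng.integers(0, POP, size=2), rng.integers(0, POP, size=2)
        p1 = population[a[np.argmax(scores[a])]]
        p2 = population[b[np.argmax(scores[b])]]
        # Uniform crossover followed by Gaussian mutation of every gene.
        mask = rng.random(GENOME_LEN) < 0.5
        child = np.where(mask, p1, p2) + rng.normal(0.0, MUT_STD, GENOME_LEN)
        next_pop.append(child)
    population = np.array(next_pop)

best = population[np.argmax([fitness(g) for g in population])]
print("outputs:", np.round(forward(best, X), 2))  # evolved outputs for the four XOR patterns
```

Unlike backpropagation, this kind of search uses only fitness evaluations of complete weight vectors and requires no gradient information, which is the basic trade-off the paper's comparison examines.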