A HYBRID GA/PSO TO EVOLVE ARTIFICIAL RECURRENT NEURAL NETWORKS

In this paper we propose a novel hybrid algorithm (GA/PSO) that combines the strengths of particle swarm optimization with those of genetic algorithms to evolve the weights of recurrent neural networks. Particle swarm optimization and genetic algorithms are two optimization techniques that have proven successful in solving difficult problems; in particular, both can successfully evolve recurrent neural networks. The hybrid algorithm combines the standard velocity and position update rules of PSO with the ideas of selection and crossover from GAs. We compare the hybrid algorithm to the standard GA and PSO models when evolving weights for two recurrent neural network problems. This paper presents results using the new hybrid algorithm and describes the hybrid's benefits.

INTRODUCTION

Genetic algorithms (GAs) and particle swarm optimization (PSO) are both population-based algorithms that have proven successful in solving very difficult problems, including the evolution of recurrent artificial neural networks (RANNs). However, both models have strengths and weaknesses. Comparisons between GAs and PSO have been performed by both Eberhart (1998) and Angeline (1998), and both conclude that a hybrid of the standard GA and PSO models could lead to further advances. In this paper we present a novel hybrid algorithm, combining the strengths of GAs with those of PSO. The hybrid algorithm is compared to the standard GA and PSO models in evolving the weights of two RANN problems.

Traditional neural network (NN) training algorithms, such as backpropagation, are based on gradient descent. They systematically and incrementally reduce the output error. Although gradient descent approaches are very effective for a wide range of problems, they suffer from two significant drawbacks. First, they are generally restricted to finding a local minimum. Second, they may get stuck in flat regions of the search space. In such cases the typical recourse is to restart the algorithm from a new random location. In contrast, population-based searches are not restricted to a local search and can easily move across flat regions. However, they tend to be slower than gradient descent approaches.

1 This work is supported by NSF EPSCoR EPS-0132626. The experiments were performed on a Beowulf cluster built with funds from NSF grant EPS-80935 and a generous hardware donation from Micron Technologies.
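To make the combination concrete, the following is a minimal sketch of a GA/PSO hybrid in Python. It applies the standard PSO velocity and position update to every individual, then, GA-style, selects the fitter half of the population and breeds replacements by crossover. The specific constants (inertia weight, acceleration coefficients), the arithmetic crossover operator, and the sphere-function fitness standing in for network error are illustrative assumptions, not the paper's exact operators or settings.

```python
import random

# Sketch of a GA/PSO hybrid on a generic real-valued weight vector.
# The sphere function below is a stand-in for RANN error; the paper
# evolves the connection weights of recurrent neural networks.

DIM = 10                         # number of weights being evolved (illustrative)
POP = 30                         # population / swarm size (illustrative)
W, C1, C2 = 0.729, 1.49, 1.49    # common PSO constants (assumed, not from the paper)

def fitness(x):
    # Lower is better: stand-in for network output error.
    return sum(xi * xi for xi in x)

def new_particle():
    pos = [random.uniform(-1, 1) for _ in range(DIM)]
    return {"x": pos, "v": [0.0] * DIM, "best_x": pos[:], "best_f": fitness(pos)}

def pso_update(p, gbest):
    # Standard PSO velocity and position update rules.
    for i in range(DIM):
        r1, r2 = random.random(), random.random()
        p["v"][i] = (W * p["v"][i]
                     + C1 * r1 * (p["best_x"][i] - p["x"][i])
                     + C2 * r2 * (gbest[i] - p["x"][i]))
        p["x"][i] += p["v"][i]
    f = fitness(p["x"])
    if f < p["best_f"]:
        p["best_f"], p["best_x"] = f, p["x"][:]

def crossover(a, b):
    # GA-style arithmetic crossover of two selected particles (one plausible
    # operator; the paper's exact crossover may differ).
    x = [(ai + bi) / 2.0 for ai, bi in zip(a["x"], b["x"])]
    v = [(av + bv) / 2.0 for av, bv in zip(a["v"], b["v"])]
    return {"x": x, "v": v, "best_x": x[:], "best_f": fitness(x)}

def hybrid_step(swarm):
    gbest = min(swarm, key=lambda p: p["best_f"])["best_x"][:]
    for p in swarm:
        pso_update(p, gbest)                  # PSO part: fly every particle
    swarm.sort(key=lambda p: p["best_f"])     # GA part: select the fitter half...
    parents = swarm[: POP // 2]
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(POP - len(parents))]   # ...and breed replacements
    return parents + children

swarm = [new_particle() for _ in range(POP)]
for gen in range(100):
    swarm = hybrid_step(swarm)
print("best error:", min(p["best_f"] for p in swarm))
```

In the setting of the paper, the weight vector would encode a recurrent network's connection weights and the fitness would be the network's error on the benchmark task.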

[1] J. Salerno, et al. Using the particle swarm optimization technique to train a recurrent neural model, 1997, Proceedings Ninth IEEE International Conference on Tools with Artificial Intelligence.

[2] M. Mandischer. Evolving recurrent neural networks with non-binary encoding, 1995, Proceedings of 1995 IEEE International Conference on Evolutionary Computation.

[3] Peter J. Angeline, et al. An evolutionary algorithm that constructs recurrent neural networks, 1994, IEEE Trans. Neural Networks.

[4] Peter J. Angeline, et al. Structural and Behavioral Evolution of Recurrent Networks, 1993, NIPS.

[5] Russell C. Eberhart, et al. Comparison between Genetic Algorithms and Particle Swarm Optimization, 1998, Evolutionary Programming.

[6] Y. Rahmat-Samii, et al. Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna, 2002, IEEE Antennas and Propagation Society International Symposium (IEEE Cat. No.02CH37313).

[7] Peter J. Angeline, et al. Evolutionary Optimization Versus Particle Swarm Optimization: Philosophy and Performance Differences, 1998, Evolutionary Programming.

[8] Terence Soule, et al. Comparison of Genetic Algorithm and Particle Swarm Optimizer When Evolving a Recurrent Neural Network, 2003, GECCO.

[9] Barbara Hammer, et al. Learning with recurrent neural networks, 2000.

[10] Thomas Kiel Rasmussen, et al. Hybrid Particle Swarm Optimiser with breeding and subpopulations, 2001.

[11] C. Lee Giles, et al. An experimental comparison of recurrent neural networks, 1994, NIPS.