Toward Evolving Neural Networks using Bio-Inspired Algorithms

This paper proposes SWarm Intelligence-based Reinforcement Learning (SWIRL), a method for efficiently generating Artificial Neural Network (ANN) solutions to a variety of problems. SWIRL combines two swarm intelligence algorithms to train the ANN models: Ant Colony Optimization (ACO) optimizes the network topology, while Particle Swarm Optimization (PSO) adjusts the connection weights. The ANN models trained by SWIRL are evaluated on the XOR and double pole balancing problems as case studies. Extensive simulation results demonstrate that SWIRL is competitive with modern neuroevolution techniques and viable for real-world problems.
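
To make the division of labor concrete, the sketch below illustrates only the PSO half of the approach: a standard inertia-weight PSO adjusting the connection weights of a fixed 2-2-1 feed-forward network on the XOR task. This is a minimal illustration under assumptions, not the paper's exact SWIRL algorithm; the ACO topology search is omitted, the 2-2-1 architecture is assumed, and all hyperparameters are typical textbook values rather than values reported in the paper.

```python
# Minimal sketch (not the paper's exact SWIRL procedure): PSO adjusting the
# weights of an assumed, fixed 2-2-1 feed-forward network on XOR.
# The ACO topology search is omitted; all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # Unpack a flat particle vector into a 2-2-1 network with biases.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def mse(w):
    preds = np.array([forward(w, x) for x in X])
    return np.mean((preds - y) ** 2)

# Inertia-weight PSO (Kennedy & Eberhart style) over the 9 weight parameters.
n_particles, dim, iters = 30, 9, 500
w_inertia, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best MSE:", pbest_val.min())
print("outputs:", [round(float(forward(gbest, x)), 3) for x in X])
```

In the full method described by the abstract, the topology itself would not be fixed in advance; an ACO search would propose candidate structures, and a PSO loop like the one above would evaluate each candidate by tuning its weights.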
