Evolutionary learning for improving performance of robot navigation

This paper presents the application of evolutionary learning techniques to improving the performance of robot navigation. The goal is to build an intelligent control algorithm that drives the robot through an unknown environment at the maximum allowable speed, while avoiding obstacles and minimizing its rate of turning. The robot controller is based on an artificial neural network (ANN) that takes inputs from range sensors and produces outputs to control the drive motors. The ANN is evolved using a simple genetic algorithm. Two evolutionary learning approaches are evaluated: in the first, the synaptic weights of the network are evolved; in the second, the adaptation rules of the synapses are evolved. At the end of the evolutionary process, both approaches produced well-performing controllers that avoid collisions while maximizing linear speed and minimizing turning.
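
The first approach described above (evolving synaptic weights directly) can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the network size, sensor/motor layout, fitness proxy, and all genetic-algorithm parameters are assumptions chosen for brevity.

```python
import math
import random

# Minimal sketch: evolve the synaptic weights of a tiny feedforward
# controller with a simple genetic algorithm.  The network maps two
# range-sensor readings (left, right clearance in [0, 1]) to two motor
# commands.  All details here are illustrative assumptions, not the
# paper's actual setup.

N_IN, N_OUT = 2, 2
N_WEIGHTS = (N_IN + 1) * N_OUT          # weights plus one bias per output


def forward(weights, sensors):
    """Single-layer controller: motor = tanh(w . sensors + bias)."""
    outs = []
    for o in range(N_OUT):
        base = o * (N_IN + 1)
        s = weights[base + N_IN]        # bias term
        for i in range(N_IN):
            s += weights[base + i] * sensors[i]
        outs.append(math.tanh(s))
    return outs


def fitness(weights):
    """Toy proxy for the stated objective: reward forward speed,
    penalize turning, and weight both by obstacle clearance."""
    score = 0.0
    # Three hypothetical sensor situations: open space, obstacle left,
    # obstacle right.
    for left, right in [(0.9, 0.9), (0.2, 0.9), (0.9, 0.2)]:
        m_left, m_right = forward(weights, (left, right))
        speed = (m_left + m_right) / 2.0
        turn = abs(m_left - m_right)
        clearance = min(left, right)
        score += speed * clearance - 0.5 * turn * clearance
    return score


def evolve(pop_size=30, generations=50, sigma=0.3, seed=0):
    """Simple GA: truncation selection plus Gaussian mutation,
    keeping the elite unchanged between generations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]
        pop = elite + [
            [w + rng.gauss(0.0, sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)


best = evolve()
```

The second approach in the abstract would replace the fixed weight vector with an evolved set of synaptic adaptation (plasticity) rules applied during the robot's lifetime; the outer GA loop would remain essentially the same.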
