Evolving Neuro-Controllers and Sensors for Artificial Agents

Evolutionary algorithms, loosely inspired by biological evolutionary processes, have gained considerable popularity as tools for searching vast, complex, deceptive, and multimodal search spaces using little domain-specific knowledge [159, 157, 170]. In addition to their application in a variety of optimization problems, evolutionary algorithms have also been used to design control programs (e.g., artificial neural networks, finite-state automata, LISP programs) for a wide variety of robot tasks. In such cases, evolutionary search operates in the space of robot control programs, with each member of the population representing a robot behavior. By evaluating these behaviors on the target robot task and performing fitness-based selection and reproduction, evolution discovers robot behaviors (control programs) that lead to effective execution of the robot's task. Some researchers have also used artificial evolution to design robot sensors and their placements [175, 178, 148], tune sensor characteristics [167, 146], and even evolve robot body plans [165, 164]. Widespread interest in the use of artificial evolution in the design of robots and software agents has given birth to a field that is increasingly referred to as evolutionary robotics.

But why does one need an evolutionary approach for synthesizing robot behaviors? Robot behaviors often involve complex tradeoffs between multiple competing alternatives that are difficult to characterize a priori. Even in cases where the alternatives are identifiable, it is often hard to specify a priori how to cope with them. For example, suppose a robot has the task of clearing a room by pushing boxes to the walls. Assume also that the robot's limited sensing range prevents it from observing the contents of the entire room, and that it has no means of remembering the positions of boxes it has observed in the past. Suppose this robot currently observes two boxes.
Which one should it approach and push? Or should it ignore both boxes and continue its exploration to find another box to push? This decision is critical, as it directly affects the subsequent behavior of the robot. We may use a heuristic such as "approach the closer of the two boxes," but can we be sure that such a decision made at the local level will indeed lead to any kind of globally optimal behavior? Faced with such competing alternatives
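The evolutionary loop described above (a population of control programs, episode-based fitness evaluation, and fitness-based selection and reproduction) can be illustrated with a minimal sketch. The task, controller encoding, and genetic operators below are all illustrative assumptions, not any method from the works cited: a genome encodes the four weights of a tiny linear controller steering an agent toward a target, and truncation selection with Gaussian mutation stands in for the many selection and variation schemes used in practice.

```python
import random

# Hypothetical toy task: an agent starting at the origin must reach a target.
# Each genome is a list of 4 weights (a 2x2 matrix, row-major) mapping the
# sensed vector toward the target to a velocity command -- a stand-in for
# the evolved neuro-controllers discussed in the text.

def simulate(genome, target=(5.0, 3.0), steps=30):
    """Run one episode; fitness is the negative final distance to the target."""
    x, y = 0.0, 0.0
    w = genome
    for _ in range(steps):
        dx, dy = target[0] - x, target[1] - y          # sensor reading
        vx = w[0] * dx + w[1] * dy                     # controller output
        vy = w[2] * dx + w[3] * dy
        vx = max(-1.0, min(1.0, vx))                   # clamp actuator speed
        vy = max(-1.0, min(1.0, vy))
        x, y = x + vx, y + vy
    return -((target[0] - x) ** 2 + (target[1] - y) ** 2) ** 0.5

def evolve(pop_size=30, generations=40, sigma=0.3, seed=0):
    """Evolve controller weights: evaluate, select the fitter half, mutate."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=simulate, reverse=True)
        elite = ranked[: pop_size // 2]                # truncation selection
        children = [[w + rng.gauss(0, sigma) for w in rng.choice(elite)]
                    for _ in range(pop_size - len(elite))]
        pop = elite + children                         # reproduction + mutation
    return max(pop, key=simulate)

best = evolve()
print(simulate(best))  # near zero when the evolved controller reaches the target
```

Note that the designer specifies only the fitness measure (distance to the target at the end of an episode), not how to trade off the intermediate decisions; evolution discovers the control weights, which is precisely the appeal in situations like the box-pushing example, where good local decision rules are hard to specify a priori.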
