Action-selection and crossover strategies for self-modeling machines

In previous work [7], a computational framework was demonstrated that employs evolutionary algorithms to automatically model a given system by alternating the evolution of models with an evolutionary search for new training data. Theory predicts [23] that the best new training data is that which induces maximum disagreement across the current model set. Here it is demonstrated that, in a robot application, this is not the case, and alternative fitness functions are developed that seek out other, more informative training data. It is also shown that although crossover successfully reduces the mean error of the model set, it compromises the framework's ability to find new, informative training data. These findings have implications for how to create adaptive, self-modeling machines, and suggest how competitive processes in the brain underlie the generation of intelligent behavior.
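
For concreteness, the sketch below illustrates the kind of alternating loop the abstract describes: a set of candidate models is evolved against the training data gathered so far, and then a new test (action) is chosen by the query-by-committee criterion of [23], i.e. the test over which the current models disagree most. Everything here is an assumption for illustration (a toy one-dimensional system, linear models, hypothetical function names); it is not the authors' implementation, and the alternative action-selection fitness functions the paper develops are not shown.

```python
import random

# Minimal sketch of the alternating model-evolution / test-selection loop.
# The hidden system, the linear model class, and all names are illustrative
# assumptions, not the authors' code. Only the baseline criterion of [23]
# (maximize disagreement across the model set) is implemented.

random.seed(0)

def target_system(x):
    """Hidden system to be modeled; the algorithm only queries it."""
    return 1.7 * x - 0.4

def predict(model, x):
    a, b = model
    return a * x + b

def model_error(model, data):
    """Mean squared prediction error over the training data gathered so far."""
    return sum((predict(model, x) - y) ** 2 for x, y in data) / len(data)

def evolve_models(models, data, generations=30):
    """Hill-climb each model in the set against the current training data."""
    for _ in range(generations):
        improved = []
        for a, b in models:
            child = (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
            if model_error(child, data) < model_error((a, b), data):
                improved.append(child)
            else:
                improved.append((a, b))
        models = improved
    return models

def disagreement(models, x):
    """Variance of the model set's predictions at candidate test x."""
    preds = [predict(m, x) for m in models]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

def select_test(models, candidates):
    """Query-by-committee action selection: pick the most contentious test."""
    return max(candidates, key=lambda x: disagreement(models, x))

# Alternate the two phases: seed with one test, then repeat.
models = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(4)]
data = [(0.0, target_system(0.0))]
for cycle in range(5):
    models = evolve_models(models, data)
    x_new = select_test(models, [random.uniform(-5, 5) for _ in range(20)])
    data.append((x_new, target_system(x_new)))  # run the chosen test on the system
    best = min(model_error(m, data) for m in models)
    print(f"cycle {cycle}: best model error = {best:.4f}")
```

In the paper's setting the "test" is an action executed on the physical robot, and the disagreement-maximizing criterion shown here is the baseline that the reported experiments find can be outperformed by alternative fitness functions.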

[1] Phil Husbands, et al. Once More Unto the Breach: Co-evolving a robot and its simulator, 2004.

[2] Hod Lipson, et al. Automated robot function recovery after unanticipated failure or environmental change using a minimum of hardware trials, 2004, Proceedings of the 2004 NASA/DoD Conference on Evolvable Hardware.

[3] Hod Lipson, et al. Nonlinear system identification using coevolution of models and tests, 2005, IEEE Transactions on Evolutionary Computation.

[4] Hod Lipson, et al. Resilient Machines Through Continuous Self-Modeling, 2006, Science.

[5] W. Daniel Hillis, et al. Co-evolving parasites improve simulated evolution as an optimization procedure, 1990.

[6] Mario Graff, et al. System Identification Using Genetic Programming and Gene Expression Programming, 2005, ISCIS.

[7] Hod Lipson, et al. 'Managed challenge' alleviates disengagement in co-evolutionary system identification, 2005, GECCO '05.

[8] Hod Lipson, et al. Once More Unto the Breach: Co-evolving a robot and its simulator, 2004.

[9] Nick Jakobi, et al. Evolutionary Robotics and the Radical Envelope-of-Noise Hypothesis, 1997, Adaptive Behavior.

[10] Edwin D. de Jong, et al. Ideal Evaluation from Coevolution, 2004, Evolutionary Computation.

[11] Jordan B. Pollack, et al. Evolutionary Techniques in Physical Robotics, 2000, ICES.

[12] Francesco Mondada, et al. Hardware Solutions for Evolutionary Robotics, 1998.

[13] Nicholas Pippenger, et al. An optimal brain can be composed of conflicting agents, 2006, Proceedings of the National Academy of Sciences of the United States of America.

[14] Hod Lipson, et al. Automating Genetic Network Inference with Minimal Physical Experimentation Using Coevolution, 2004, GECCO.

[15] Hod Lipson, et al. Active Coevolutionary Learning of Deterministic Finite Automata, 2005, Journal of Machine Learning Research.

[16] Ran El-Yaniv, et al. Online Choice of Active Learning Algorithms, 2003, Journal of Machine Learning Research.

[17] J. Bongard, et al. Co-evolutionary algorithm for structural damage identification using minimal physical testing, 2007.

[18] D. Floreano, et al. Evolutionary Robotics: The Biology, Intelligence, and Technology, 2000.

[19] John R. Koza, et al. Genetic Programming: On the Programming of Computers by Means of Natural Selection, 1993, Complex Adaptive Systems.

[20] Lennart Ljung, et al. System Identification: Theory for the User, 1987.

[21] Inman Harvey, et al. Evolving visually guided robots, 1993.

[22] David J. Murray-Smith, et al. Nonlinear model structure identification using genetic programming, 1998.

[23] H. Sebastian Seung, et al. Query by committee, 1992, COLT '92.

[24] Christopher H. Bryant, et al. Functional genomic hypothesis generation and experimentation by a robot scientist, 2004, Nature.

[25] H. Sebastian Seung, et al. Learning to Walk in 20 Minutes, 2005.