On-line neuroevolution applied to The Open Racing Car Simulator

The application of on-line learning techniques to modern computer games is a promising research direction, as they can be used to improve the game experience and to achieve truly adaptive game AI. Several works have shown that neuroevolution techniques can be successfully applied to modern computer games, but they are usually restricted to off-line learning scenarios. In on-line learning problems, the main challenge is to find a good trade-off between exploration, i.e., the search for better solutions, and exploitation of the best solution discovered so far. In this paper we propose an on-line neuroevolution approach to evolve non-player characters in The Open Racing Car Simulator (TORCS), a state-of-the-art open-source car racing simulator. We tested our approach on two on-line learning problems: (i) the on-line evolution of a fast controller from scratch and (ii) the optimization of an existing controller for a new track. Our results show that on-line neuroevolution can effectively improve the performance achieved during the learning process.
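The core idea of on-line evolution — balancing exploration (evaluating mutated candidates during play) against exploitation (reusing the best controller found so far) — can be illustrated with a minimal sketch. This is not the paper's method: the toy fitness function, the `epsilon` parameter, and the weight-vector "controller" are all illustrative assumptions standing in for a neural controller evaluated over TORCS laps.

```python
import random

# Hypothetical stand-in for a lap evaluation: the "controller" is a flat
# weight vector, and fitness is higher the closer it gets to a hidden
# target policy. In the paper, fitness would come from driving in TORCS.
TARGET = [0.3, -0.7, 0.5, 0.1]

def evaluate(weights):
    # Negative squared distance to the target policy: higher is better.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rng, sigma=0.2):
    # Gaussian perturbation of every weight (the exploration step).
    return [w + rng.gauss(0.0, sigma) for w in weights]

def online_neuroevolution(episodes=200, epsilon=0.5, seed=0):
    """epsilon controls the exploration/exploitation trade-off:
    with probability epsilon a mutated challenger is evaluated on the
    next episode; otherwise the current champion is exploited."""
    rng = random.Random(seed)
    champion = [rng.uniform(-1.0, 1.0) for _ in range(len(TARGET))]
    best_fitness = evaluate(champion)
    history = [best_fitness]
    for _ in range(episodes):
        if rng.random() < epsilon:
            challenger = mutate(champion, rng)
            f = evaluate(challenger)
            if f > best_fitness:  # keep only improvements
                champion, best_fitness = challenger, f
        history.append(best_fitness)
    return champion, best_fitness, history

champ, fit, hist = online_neuroevolution()
print(f"initial fitness {hist[0]:.3f} -> final fitness {fit:.3f}")
```

Because only improving challengers replace the champion, the exploited performance is non-decreasing over episodes; raising `epsilon` speeds up discovery at the cost of spending more episodes on unproven (possibly worse) controllers, which is exactly the trade-off the on-line setting must manage.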
