Evolving Transferable Artificial Neural Networks for Gameplay Tasks via NEAT with Phased Searching

NeuroEvolution of Augmenting Topologies (NEAT) has been successfully applied to intelligent gameplay. A key technique for further improving its effectiveness is to reuse the knowledge learned on source gameplay tasks to boost performance on target gameplay tasks; we formulate this as a Transfer Learning (TL) problem. However, the Artificial Neural Networks (ANNs) evolved by NEAT are often unnecessarily complicated, which may harm their transferability. To address this issue, we investigate in this paper the capability of Phased Searching (PS) methods to control the complexity of ANNs while maintaining their effectiveness, thereby obtaining more transferable ANNs. Furthermore, we propose a new Power-Law Ranking Probability based PS (PLPS) method that more effectively controls the randomness during the simplification phase. Several recent PS methods, as well as our PLPS, have been evaluated on four carefully designed TL experiments. Results clearly show that NEAT can evolve more transferable and structurally simpler ANNs with the help of PS methods, in particular PLPS.
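The abstract does not spell out the exact form of the power-law ranking probability used by PLPS. As a rough illustration only, the sketch below assumes one plausible instantiation: candidate structural elements (e.g., connection genes) are ranked by some pruning criterion, and the element removed during the simplification phase is drawn with probability proportional to rank^(-alpha). The function names, the exponent alpha, the ranking criterion, and the rank^(-alpha) form are all illustrative assumptions, not the paper's definitive method.

```python
import random

def power_law_rank_probabilities(n, alpha=1.5):
    """Assign each of n ranked candidates a selection probability
    proportional to rank**(-alpha), so rank 1 is the most likely pick.
    (alpha and the power-law form are illustrative assumptions.)"""
    weights = [(rank + 1) ** -alpha for rank in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

def select_structure_to_prune(candidates, alpha=1.5):
    """Pick one structural element to remove in the simplification phase.
    `candidates` must be pre-sorted so that index 0 is the most attractive
    pruning target (the ranking criterion itself is an assumption, e.g.,
    smallest estimated contribution to fitness)."""
    probs = power_law_rank_probabilities(len(candidates), alpha)
    return random.choices(candidates, weights=probs, k=1)[0]

# Example: five connection genes, already ranked by a pruning criterion.
if __name__ == "__main__":
    genes = ["conn_a", "conn_b", "conn_c", "conn_d", "conn_e"]
    print(select_structure_to_prune(genes, alpha=1.5))
```

Under this reading, the exponent alpha would control the randomness the abstract refers to: a large alpha concentrates removals on the top-ranked candidates (nearly greedy simplification), while a small alpha spreads probability across the ranking and preserves more exploration.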
