Variable Hidden Layer Sizing in Elman Recurrent Neuro-Evolution

The relationship between hidden layer size and a neural network's performance in a given domain remains an open research question. The number of hidden neurons is often chosen empirically and then fixed for the training of the network. Fixing the size of the hidden layer limits an inherent strength of neural networks: the ability to generalize experiences from one situation to another, to adapt to new situations, and to overcome the "brittleness" often associated with traditional artificial intelligence techniques. This paper proposes an evolutionary algorithm that searches for network sizes along with the weights and connections between neurons. This research builds upon the neuro-evolution tool SANE, developed by David Moriarty. SANE evolves neurons and networks simultaneously; it is modified in this work in several ways, including varying the hidden layer size and evolving Elman recurrent neural networks for non-Markovian tasks. These modifications produce better-performing and more consistent networks, and do so more quickly and efficiently. SANE, modified with variable network sizing, learns to play modified casino blackjack and develops a successful card-counting strategy. The contributions of this research are performance gains of up to 8.3% over fixed hidden-layer-size models while reducing hidden layer processing time by almost 10%, and a faster, more autonomous approach to scaling neuro-evolutionary techniques to larger and more difficult problems.
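The two ingredients the abstract combines — an Elman recurrent network (a hidden layer whose previous activations feed back in through context units) and a hidden layer whose size can change during evolution — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not SANE's actual neuron/network encoding: the class name `ElmanNet`, the `resize_hidden` structural mutation, and the plain-Python weight matrices are all hypothetical choices made here for clarity.

```python
import math
import random


class ElmanNet:
    """Minimal Elman recurrent network with a variable-size hidden layer.

    Illustrative only: the paper evolves such networks with a modified
    SANE; here resize_hidden() just shows the kind of structural change
    a variable-sizing evolutionary algorithm could apply to a genome.
    """

    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = rng or random.Random(0)
        self.n_in, self.n_hidden, self.n_out = n_in, n_hidden, n_out
        w = lambda rows, cols: [[rng.uniform(-1, 1) for _ in range(cols)]
                                for _ in range(rows)]
        self.w_in = w(n_hidden, n_in)       # input -> hidden weights
        self.w_ctx = w(n_hidden, n_hidden)  # context (prev. hidden) -> hidden
        self.w_out = w(n_out, n_hidden)     # hidden -> output weights
        self.context = [0.0] * n_hidden     # Elman context units

    def step(self, x):
        """One time step: hidden sees the input plus the previous hidden state."""
        hidden = [math.tanh(
                      sum(wi * xi for wi, xi in zip(self.w_in[j], x)) +
                      sum(wc * c for wc, c in zip(self.w_ctx[j], self.context)))
                  for j in range(self.n_hidden)]
        self.context = hidden               # save activations for the next step
        return [sum(wo * h for wo, h in zip(self.w_out[k], hidden))
                for k in range(self.n_out)]

    def resize_hidden(self, delta, rng=None):
        """Grow (delta > 0) or shrink (delta < 0) the hidden layer in place,
        padding new connections with small random weights."""
        rng = rng or random.Random(1)
        new_n = max(1, self.n_hidden + delta)
        if new_n > self.n_hidden:
            extra = new_n - self.n_hidden
            for _ in range(extra):
                self.w_in.append([rng.uniform(-1, 1) for _ in range(self.n_in)])
            for row in self.w_ctx:          # widen existing context rows
                row.extend(rng.uniform(-1, 1) for _ in range(extra))
            for _ in range(extra):          # add full-width rows for new units
                self.w_ctx.append([rng.uniform(-1, 1) for _ in range(new_n)])
            for row in self.w_out:
                row.extend(rng.uniform(-1, 1) for _ in range(extra))
            self.context.extend(0.0 for _ in range(extra))
        else:                               # drop the trailing hidden units
            self.w_in = self.w_in[:new_n]
            self.w_ctx = [row[:new_n] for row in self.w_ctx[:new_n]]
            self.w_out = [row[:new_n] for row in self.w_out]
            self.context = self.context[:new_n]
        self.n_hidden = new_n
```

A fitness-driven search over `delta` values, alongside ordinary weight mutation, is the essence of letting evolution choose the hidden layer size instead of fixing it by hand.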
