A Unification of Genetic Algorithms, Neural Networks and Fuzzy Logic: The GANNFL Approach

The GANNFL Approach uses a steady-state Genetic Algorithm (GA) to build and train hybrid classifiers that combine Neural Networks (NNs) and Fuzzy Logic (FL). This novel approach finds the architecture, the types of the hidden units, the types of the output units, and all weights. The modular, tight GA encoding, together with the GA fitness function, lets the GA develop high-performance hybrid classifiers in which NN parts and FL parts cooperate tightly within the same architecture. By analysing the behaviour of the GA, it is investigated whether there is evidence for preferring NN classifiers or FL classifiers, and the importance of the types of the hidden units is examined. Parameter reduction is a very important issue according to the theory of the VC dimension and Ockham's Razor; hence, the GANNFL Approach also focuses on parameter reduction, achieved by automatically pruning unnecessary weights and units. The GANNFL Approach was tested on five well-known classification problems: the artificial, noisy monks3 problem and four difficult real-world problems containing missing, noisy, misclassified, and scarce data: the cancer, card, diabetes, and glass problems. The results are compared to publicly available results found by other NN approaches, the sGANN approach (a simple GA that trains NNs), and the ssGAFL approach (a steady-state GA that trains FL classifiers). In every case, the GANNFL Approach found a result better than or comparable to that of the best other approach.
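The core mechanics described above (a steady-state GA, a modular per-unit encoding of unit type and weights, a fitness function, and pruning of unnecessary weights and units) can be illustrated with a minimal Python sketch. This is a hypothetical illustration, not the paper's actual encoding: the fixed maximum architecture, the sigmoid and fuzzy-minimum unit semantics, the parameter penalty in the fitness, the GA settings, and the toy data are all assumptions made for the example.

import math
import random

N_IN, N_HID, N_OUT = 4, 6, 2          # assumed fixed maximum architecture
POP_SIZE, ITERATIONS = 40, 2000       # illustrative GA settings

def random_unit(n_inputs):
    # Each unit carries an active flag (pruning), a type flag (NN or FL),
    # and its incoming weights plus a bias.
    return {
        "active": random.random() < 0.8,
        "type": random.choice(["nn", "fl"]),
        "w": [random.uniform(-1.0, 1.0) for _ in range(n_inputs + 1)],
    }

def random_genome():
    return {
        "hidden": [random_unit(N_IN) for _ in range(N_HID)],
        "output": [random_unit(N_HID) for _ in range(N_OUT)],
    }

def unit_out(unit, inputs):
    if not unit["active"]:
        return 0.0                                    # pruned unit
    if unit["type"] == "nn":
        z = unit["w"][-1] + sum(w * x for w, x in zip(unit["w"], inputs))
        return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, z))))  # sigmoid
    # "fl": a crude fuzzy conjunction (minimum) of clipped weighted inputs
    return min(max(0.0, min(1.0, w * x)) for w, x in zip(unit["w"], inputs))

def forward(genome, x):
    h = [unit_out(u, x) for u in genome["hidden"]]
    return [unit_out(u, h) for u in genome["output"]]

def predict(genome, x):
    out = forward(genome, x)
    return max(range(N_OUT), key=lambda k: out[k])

def n_active_params(genome):
    units = genome["hidden"] + genome["output"]
    return sum(len(u["w"]) for u in units if u["active"])

def fitness(genome, data):
    # Accuracy minus a small penalty per surviving parameter rewards pruning.
    accuracy = sum(1 for x, y in data if predict(genome, x) == y) / len(data)
    return accuracy - 0.001 * n_active_params(genome)

def crossover(a, b):
    # Modular, unit-wise crossover: each unit is copied intact from one parent.
    child = {"hidden": [], "output": []}
    for key in ("hidden", "output"):
        for ua, ub in zip(a[key], b[key]):
            src = ua if random.random() < 0.5 else ub
            child[key].append({"active": src["active"], "type": src["type"],
                               "w": list(src["w"])})
    return child

def mutate(genome, rate=0.05):
    for u in genome["hidden"] + genome["output"]:
        if random.random() < rate:
            u["active"] = not u["active"]             # prune or revive a unit
        if random.random() < rate:
            u["type"] = "fl" if u["type"] == "nn" else "nn"
        u["w"] = [w + random.gauss(0.0, 0.1) if random.random() < rate else w
                  for w in u["w"]]

def steady_state_ga(data):
    pop = [random_genome() for _ in range(POP_SIZE)]
    scores = [fitness(g, data) for g in pop]
    for _ in range(ITERATIONS):
        # Steady state: pick two tournament winners, breed one child,
        # and let it replace the current worst individual.
        i, j = (max(random.sample(range(POP_SIZE), 3), key=lambda k: scores[k])
                for _ in range(2))
        child = crossover(pop[i], pop[j])
        mutate(child)
        worst = min(range(POP_SIZE), key=lambda k: scores[k])
        pop[worst], scores[worst] = child, fitness(child, data)
    return max(pop, key=lambda g: fitness(g, data))

if __name__ == "__main__":
    # Toy task (assumption): label is 1 when the inputs sum to more than 2.
    xs = [[random.random() for _ in range(N_IN)] for _ in range(100)]
    data = [(x, int(sum(x) > 2.0)) for x in xs]
    best = steady_state_ga(data)
    print("train accuracy:", sum(predict(best, x) == y for x, y in data) / len(data))
    print("active parameters:", n_active_params(best))

In this sketch, the steady-state scheme replaces only the worst individual per iteration, and because the active flags and type flags evolve alongside the weights, the GA simultaneously prunes the classifier and decides, unit by unit, whether an NN-style or FL-style unit is preferable.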
