An investigation of topological choices in FS-NEAT and FD-NEAT on XOR-based problems of increased complexity

Feature Selective Neuroevolution of Augmenting Topologies (FS-NEAT) and Feature De-selective Neuroevolution of Augmenting Topologies (FD-NEAT) are two well-known methods for optimizing the topology and weights of Artificial Neural Networks (ANNs) while simultaneously performing feature selection. The literature has shown that starting the evolution with ANNs containing one hidden layer can affect the performance of FD-NEAT and FS-NEAT. However, no study has investigated the effects of changing the networks' initial connectivity. In this paper, we investigate how the choice of the number of initially connected inputs affects the performance of FD-NEAT and FS-NEAT in terms of accuracy, number of generations required for convergence, ability to perform feature selection, and size of the evolved networks. For this purpose, we employ artificial datasets of increasing complexity based on the exclusive-or (XOR) problem with added irrelevant features. The different initial topological settings are compared using Kruskal-Wallis hypothesis tests with Bonferroni correction (p < 0.01), while FD-NEAT and FS-NEAT are compared using Wilcoxon rank-sum hypothesis tests (p < 0.01). The results show that the initial connectivity setting does not affect the performance of FD-NEAT or FS-NEAT.
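The XOR-with-irrelevant-features benchmark described above can be sketched in a few lines. The function below is a minimal illustration, not the authors' code: the name `make_xor_dataset` and all of its parameters are assumptions. The first two inputs jointly determine the label via exclusive-or, while the remaining inputs are uniform noise that a feature-selective method such as FS-NEAT or FD-NEAT should learn to leave disconnected.

```python
import random

def make_xor_dataset(n_samples=200, n_irrelevant=8, seed=0):
    """Hypothetical helper (not from the paper): build a toy XOR dataset.

    The first two features are the relevant binary inputs; the label is
    their exclusive-or. The remaining n_irrelevant features are uniform
    noise, so dataset complexity grows with n_irrelevant.
    """
    rng = random.Random(seed)
    samples, labels = [], []
    for _ in range(n_samples):
        a, b = rng.randint(0, 1), rng.randint(0, 1)
        noise = [rng.random() for _ in range(n_irrelevant)]
        samples.append([float(a), float(b)] + noise)
        labels.append(a ^ b)
    return samples, labels

X, y = make_xor_dataset(n_samples=100, n_irrelevant=4)
```

Increasing `n_irrelevant` yields the "increasing complexity" axis of the study: the more noise inputs are present, the harder it is for the evolved networks to identify and connect only the two relevant features.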
