Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks

In this paper we present a comparative study of four hybrid methods for optimizing multilayer perceptrons: a model that optimizes the architecture and initial weights; a parallel approach that optimizes the architecture and initial weights; a method that searches for the parameters of the training algorithm; and an approach based on cooperative co-evolutionary optimization of multilayer perceptrons. A minimal sketch of the general idea shared by these hybrids is given after the abstract.
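All four approaches couple an evolutionary search with gradient-based training of the network. The following toy example is a minimal sketch of that general scheme, not any of the compared systems: it evolves two hyperparameters of a multilayer perceptron (hidden-layer size and initial learning rate), scoring each candidate with a short backpropagation run. The dataset, population size, operators, and the use of scikit-learn's MLPClassifier are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementations): a toy
# evolutionary search over MLP hyperparameters, scored by short backprop runs.
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic classification problem standing in for a benchmark dataset.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(genome):
    """Train briefly with backprop and use validation accuracy as fitness."""
    hidden, lr = genome
    net = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=50, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_val, y_val)

def mutate(genome):
    """Perturb either the hidden-layer size or the learning rate."""
    hidden, lr = genome
    if random.random() < 0.5:
        hidden = max(2, hidden + random.choice([-2, -1, 1, 2]))
    else:
        lr = float(np.clip(lr * random.uniform(0.5, 2.0), 1e-4, 1.0))
    return (hidden, lr)

random.seed(0)
# Each genome encodes (hidden units, initial learning rate).
population = [(random.randint(2, 20), 10 ** random.uniform(-3, -1)) for _ in range(8)]
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:4]                      # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

best = max(population, key=fitness)
print("best (hidden units, learning rate):", best)
```

The hybrid systems compared in the paper differ mainly in what the genome encodes (architecture, initial weights, training-algorithm parameters, or cooperating sub-networks) and in how the evolutionary search is organized (sequential, parallel, or co-evolutionary).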
