Evolving modular networks with genetic algorithms: application to nonlinear time series

A key problem in modular neural networks is finding the optimal aggregation of the different subtasks (or modules) of the problem at hand. Functional networks provide a partial solution, since the inter-module topology is obtained from domain knowledge (functional relationships and symmetries). However, the resulting learning process may be too restrictive in some situations, because the modules (functional units) are assumed to be linear combinations of selected families of functions. In this paper, we present a non-parametric learning approach for functional networks that uses feedforward neural networks to approximate the functional modules of the resulting architecture; we also introduce a genetic algorithm for finding the optimal intra-module topology, i.e., the appropriate distribution of neurons among the modules according to the complexity of their respective tasks. Benchmark examples from nonlinear time-series prediction illustrate the performance of the algorithm in finding modular network architectures suited to specific problems.
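To make the intra-module architecture search concrete, the following is a minimal sketch rather than the paper's implementation: it assumes an additive modular network x_t ≈ g1(x_{t-1}) + g2(x_{t-2}) trained on the logistic map (a stand-in nonlinear benchmark), approximates each module with a one-hidden-layer tanh net whose output layer is fitted by least squares over random hidden features (a simplification of full module training), and runs a small genetic algorithm over the allocation of a fixed neuron budget between the two modules. The series, the fitness function, and all names are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): a genetic algorithm that
# searches for the best split of a fixed budget of hidden neurons between the
# modules of an additive modular network  x_t ~ g1(x_{t-1}) + g2(x_{t-2}).
# Each module is a one-hidden-layer tanh net; as a simplification, hidden
# weights are random and only the linear output layer is fitted by least
# squares.  The logistic map serves as a stand-in nonlinear benchmark.
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(n, x0=0.3):
    """Logistic-map time series x_{t+1} = 4 x_t (1 - x_t)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])
    return x

series = logistic_series(600)
X = np.column_stack([series[1:-1], series[:-2]])   # inputs: x_{t-1}, x_{t-2}
y = series[2:]                                     # target: x_t
X_tr, X_va, y_tr, y_va = X[:400], X[400:], y[:400], y[400:]

TOTAL_NEURONS = 20                                 # budget shared by the modules

def module_features(x, n_hidden, seed):
    """Random tanh hidden layer of one single-input module."""
    r = np.random.default_rng(seed)
    w = r.normal(size=(1, n_hidden))
    b = r.normal(size=n_hidden)
    return np.tanh(x[:, None] * w + b)

def fitness(alloc):
    """Validation MSE of the modular net with alloc[i] neurons in module i."""
    H_tr = np.hstack([module_features(X_tr[:, i], n, seed=i) for i, n in enumerate(alloc)])
    H_va = np.hstack([module_features(X_va[:, i], n, seed=i) for i, n in enumerate(alloc)])
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)   # fit the linear output layer
    return float(np.mean((H_va @ beta - y_va) ** 2))

def random_allocation():
    n1 = int(rng.integers(1, TOTAL_NEURONS))       # at least one neuron per module
    return (n1, TOTAL_NEURONS - n1)

def mutate(alloc):
    """Shift a few neurons from one module to the other."""
    n1 = int(np.clip(alloc[0] + rng.integers(-2, 3), 1, TOTAL_NEURONS - 1))
    return (n1, TOTAL_NEURONS - n1)

# Simple generational GA over neuron allocations (truncation selection + mutation).
population = [random_allocation() for _ in range(12)]
for generation in range(15):
    parents = sorted(population, key=fitness)[:4]
    children = [mutate(parents[int(rng.integers(len(parents)))]) for _ in range(8)]
    population = parents + children

best = min(population, key=fitness)
print("best allocation (module 1, module 2):", best, "validation MSE:", fitness(best))
```

The chromosome here is simply the per-module neuron count, mirroring the intra-module topology search described above; a full implementation would replace the random-feature shortcut with proper training of each candidate modular network before evaluating its fitness.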
