An evolutionary methodology for training and designing artificial neural networks is presented, aimed at obtaining optimum or quasi-optimum synchronous recurrent neural networks capable of processing sequential inputs. We show that, using this method with floating-point and integer-valued chromosomes, optimum results can be achieved with very small populations and few generations. To implement the methodology, we have developed GENIAL, a genetic-algorithm development environment specifically designed for this class of problem; it provides means of testing candidate fitness functions and a range of tools for improving results. Finally, we discuss the sequential introduction of constraints into genetic algorithms, presenting a classical example in which several design requirements are met simultaneously, demonstrating the power of the method.
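To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of evolutionary training the abstract describes: a genetic algorithm with a floating-point chromosome evolving the weights of a single recurrent unit, using a small population and few generations. The task (tracking a leaky running average of a toy input sequence), all parameter values, and all names are illustrative assumptions; this is not the GENIAL system itself.

```python
import math
import random

# Toy input sequence and a recurrent target: t_k = 0.5*t_{k-1} + 0.5*x_k.
SEQ = [0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0]
TARGET = []
_t = 0.0
for _x in SEQ:
    _t = 0.5 * _t + 0.5 * _x
    TARGET.append(_t)

def run_net(w, seq):
    """One synchronous recurrent unit: state = tanh(w0*x + w1*state + w2)."""
    state, out = 0.0, []
    for x in seq:
        state = math.tanh(w[0] * x + w[1] * state + w[2])
        out.append(state)
    return out

def fitness(w):
    """Negative sum of squared errors; higher is better."""
    out = run_net(w, SEQ)
    return -sum((o - t) ** 2 for o, t in zip(out, TARGET))

def evolve(pop_size=20, gens=60, sigma=0.3, seed=1):
    """Small-population GA over a 3-gene floating-point chromosome."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]          # elitism: keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover
            child = [g + rng.gauss(0.0, sigma) for g in child] # Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Because the chromosome is the raw weight vector, no encoding/decoding step is needed, which is one reason real-valued chromosomes suit weight evolution in small search spaces like this one.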