Multilevel Genetic Algorithm for the Complete Development of ANN

The use of Genetic Algorithms (GAs) in the development of Artificial Neural Networks (ANNs) is a very active research area. Current work no longer focuses only on adjusting connection weights; increasingly, it aims at systems that carry out design and training tasks in parallel. To meet these needs, and as an open platform for new developments, this article presents a multilevel GA architecture that separates the design tasks from the training tasks. In this system, the design tasks are performed in parallel on different machines, and each design process has an associated training process that serves as its evaluation function. The design GAs exchange solutions during the simulation, cooperating with one another towards the best solution.
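The paper itself gives no code, so the following is only a rough Python sketch of the two-level idea: an outer "design" GA on each island evolves hidden-layer sizes, an inner "training" GA evolves the weights and acts as the design-level evaluation function, and islands periodically exchange their best designs. The XOR task, all function names, and the sequential island loop (the paper runs the design processes on separate machines) are illustrative assumptions, not the authors' implementation.

```python
import math
import random

random.seed(0)

# Toy task: XOR. Both the network size (design level) and its weights
# (training level) are evolved, mirroring the two GA levels in the abstract.
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def forward(weights, hidden, x):
    """2-input / `hidden`-unit / 1-output MLP over a flat weight list."""
    idx, h = 0, []
    for _ in range(hidden):
        s = weights[idx] * x[0] + weights[idx + 1] * x[1] + weights[idx + 2]
        idx += 3
        h.append(math.tanh(s))
    out = sum(w * v for w, v in zip(weights[idx:idx + hidden], h)) + weights[idx + hidden]
    out = max(-60.0, min(60.0, out))          # keep exp() in range
    return 1.0 / (1.0 + math.exp(-out))

def mse(weights, hidden):
    return sum((forward(weights, hidden, x) - y) ** 2 for x, y in DATA) / len(DATA)

def train_ga(hidden, gens=30, pop=20):
    """Inner (training) GA: evolves weights for one fixed design.
    Its best error is that design's fitness, i.e. the evaluation function."""
    n = hidden * 4 + 1                        # 3 per hidden unit + output layer
    popn = [[random.uniform(-2.0, 2.0) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda w: mse(w, hidden))
        elite = popn[: pop // 2]
        children = [[(a if random.random() < 0.5 else b) + random.gauss(0.0, 0.2)
                     for a, b in zip(*random.sample(elite, 2))]
                    for _ in range(pop - len(elite))]
        popn = elite + children
    return min(mse(w, hidden) for w in popn)

def design_islands(islands=3, pop=4, gens=4, migrate_every=2):
    """Outer (design) GAs: one island per 'machine', each evolving
    hidden-layer sizes; islands exchange their best designs periodically."""
    pops = [[random.randint(1, 6) for _ in range(pop)] for _ in range(islands)]
    best_err, best_h = float("inf"), None
    for g in range(gens):
        ranked = []
        for isl in pops:
            evals = sorted((train_ga(h), h) for h in isl)
            if evals[0][0] < best_err:
                best_err, best_h = evals[0]
            ranked.append([h for _, h in evals])
        if (g + 1) % migrate_every == 0:      # cooperative exchange of solutions
            for i in range(islands):
                ranked[(i + 1) % islands][-1] = ranked[i][0]
        pops = [[max(1, h + random.choice([-1, 0, 1])) for h in isl] for isl in ranked]
    return best_h, best_err
```

In this sketch the migration step is what makes the design GAs cooperative: each island donates its current best architecture to its neighbour, replacing that island's worst candidate, so good designs spread without centralizing the search.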
