Multilayer neural networks: an experimental evaluation of on-line training methods

Artificial neural networks (ANN) are inspired by the structure of biological neural networks and their ability to integrate knowledge and learning. In ANN training, the objective is to minimize the error over the training set. The most popular method for training these networks is back-propagation, a gradient descent technique. Other nonlinear optimization methods, such as conjugate direction set or conjugate gradient methods, have also been used for this purpose. More recently, metaheuristics such as simulated annealing, genetic algorithms, and tabu search have also been adapted to this context. There are situations in which the necessary training data are generated in real time and extensive training is not possible. This "on-line" training arises in the context of optimizing a simulation. This paper presents extensive computational experiments comparing 12 "on-line" training methods on a collection of 45 functions from the literature within a short-term horizon. We propose a new method, based on the tabu search methodology, that is competitive in quality with the best previous approaches.
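To make the training setup concrete, the sketch below shows per-sample ("on-line") back-propagation for a one-hidden-layer network, the gradient descent baseline the abstract refers to. The architecture, sigmoid activation, learning rate, and toy data are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch of "on-line" (per-sample) back-propagation for a
# network with one hidden layer and a linear output unit.
# All sizes and hyperparameters below are assumptions for illustration.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, y, n_hidden=8, lr=0.1, epochs=50):
    """Per-sample gradient descent on the squared error."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))  # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=n_hidden)          # hidden -> output weights
    b2 = 0.0

    for _ in range(epochs):
        for x, t in zip(X, y):                 # one training pattern at a time
            h = sigmoid(x @ W1 + b1)           # forward pass: hidden activations
            out = h @ W2 + b2                  # forward pass: network output
            err = out - t                      # derivative of 0.5 * err**2 w.r.t. out

            # Backward pass: propagate the error to each weight layer.
            grad_W2 = err * h
            grad_b2 = err
            delta_h = err * W2 * h * (1.0 - h) # chain rule through the sigmoid
            grad_W1 = np.outer(x, delta_h)
            grad_b1 = delta_h

            # On-line update: weights change after every sample.
            W2 -= lr * grad_W2
            b2 -= lr * grad_b2
            W1 -= lr * grad_W1
            b1 -= lr * grad_b1

    return W1, b1, W2, b2

# Toy usage: approximate y = sin(x) from a small sample.
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()
params = train_online(X, y)
```

In the on-line setting studied in the paper, only a short horizon of such per-sample updates is affordable, which is what motivates comparing gradient descent against metaheuristic alternatives such as tabu search.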
