Neural Network Learning With Global Heuristic Search

A novel hybrid global optimization (GO) algorithm applied to the supervised learning of feedforward neural networks (NNs) is investigated. The network weights are determined by minimizing the traditional mean squared error function. The optimization technique, called LPtau NM, combines a novel global heuristic search based on LPtau low-discrepancy sequences of points with a Nelder-Mead simplex local search. The proposed method is first tested on multimodal mathematical functions and then applied to training moderately sized NNs on popular benchmark problems. Finally, the results are analyzed, discussed, and compared with those obtained by backpropagation (BP) (Levenberg-Marquardt) and differential evolution methods.
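The two-stage structure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the LPtau stage is approximated by a Sobol sequence (LPτ sequences are the Sobol family of low-discrepancy points) via `scipy.stats.qmc`, and that the local stage uses SciPy's Nelder-Mead simplex. The function name `lptau_nm`, the sample count, and the bounds handling are all illustrative choices.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def lptau_nm(f, bounds, n_points=128):
    """Hybrid search sketch: low-discrepancy global sampling + simplex refinement."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    # Global stage: cover the search box with a Sobol (LPtau-type)
    # low-discrepancy sequence and evaluate the objective at each point.
    sampler = qmc.Sobol(d=len(bounds), scramble=False)
    pts = lo + sampler.random(n_points) * (hi - lo)
    vals = np.array([f(p) for p in pts])
    # Local stage: start a Nelder-Mead simplex search from the best sample.
    x0 = pts[np.argmin(vals)]
    res = minimize(f, x0, method="Nelder-Mead")
    return res.x, res.fun
```

For NN training, `f` would evaluate the mean squared error of the network for a given weight vector; here any multimodal test function can stand in for it.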
