Evolutionary methods for training neural networks

Training neural networks with a gradient-based optimization algorithm (e.g., back-propagation) often converges to locally optimal solutions that may be far removed from the global optimum. Evolutionary optimization methods offer a procedure for stochastically searching for suitable weights and bias terms given a fixed network topology. The topics discussed are evolutionary programming; genetic algorithms; evolutionary function optimization experiments; background on classification problems; and experimental results with evolutionary training.
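As a rough illustration of the idea (not the authors' exact procedure), the following sketch evolves the weights and biases of a fixed-topology feedforward network in an evolutionary-programming style: a population of flat parameter vectors is mutated with Gaussian noise and truncated by fitness each generation. The network size, population size, mutation scale, and the XOR toy task are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Fixed topology: 2 inputs -> 3 hidden (tanh) -> 1 output (sigmoid).
SHAPES = [(2, 3), (3,), (3, 1), (1,)]          # W1, b1, W2, b2
N_PARAMS = sum(int(np.prod(s)) for s in SHAPES)

def unpack(theta):
    """Split a flat parameter vector into weight/bias arrays."""
    parts, i = [], 0
    for s in SHAPES:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(theta, X, y):
    """Negative mean squared error: higher is better."""
    return -float(np.mean((forward(theta, X).ravel() - y) ** 2))

# XOR as a toy classification task (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

POP, GENS, SIGMA = 50, 300, 0.3
pop = rng.normal(0.0, 1.0, size=(POP, N_PARAMS))

for gen in range(GENS):
    # Each parent produces one offspring by Gaussian mutation of all parameters.
    offspring = pop + rng.normal(0.0, SIGMA, size=pop.shape)
    combined = np.vstack([pop, offspring])
    scores = np.array([fitness(t, X, y) for t in combined])
    # (mu + lambda) truncation selection: keep the best POP individuals.
    pop = combined[np.argsort(scores)[::-1][:POP]]

best = pop[0]
print("best MSE:", -fitness(best, X, y))
print("outputs:", forward(best, X).ravel().round(3))

Because selection relies only on the fitness score, no gradient of the error surface is required, which is what lets such a search escape some local optima that trap pure back-propagation.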