Generational versus steady-state evolution for optimizing neural network learning

The use of simulated evolution has become a commonplace technique for optimizing the learning abilities of neural network systems. Neural network details such as architecture, initial weight distributions, gradient descent learning rates, and regularization parameters have all been successfully evolved to improve performance. The author investigates which evolutionary approaches work best in this field. In particular, he compares the traditional generational approach with a more biologically realistic steady-state approach.
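
The sketch below is purely illustrative of the two replacement schemes being compared, not the author's experimental setup: it evolves a single hypothetical parameter (a learning rate) under a made-up `fitness` function, with a generational loop that replaces the whole population each generation and a steady-state loop that inserts one offspring at a time in place of the current worst individual, using a matched evaluation budget.

```python
import random

def fitness(lr):
    # Hypothetical stand-in for trained-network performance; peaks at lr = 0.01.
    return -abs(lr - 0.01)

def mutate(lr, sigma=0.005):
    # Gaussian perturbation, clipped to stay positive.
    return max(1e-6, lr + random.gauss(0.0, sigma))

def generational(pop_size=20, generations=50):
    pop = [random.uniform(1e-4, 0.1) for _ in range(pop_size)]
    for _ in range(generations):
        # Select the fitter half as parents, then replace the entire population.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(pop, key=fitness)

def steady_state(pop_size=20, evaluations=20 * 50):
    pop = [random.uniform(1e-4, 0.1) for _ in range(pop_size)]
    for _ in range(evaluations):
        # Produce one offspring per step (tournament selection of size 3)
        # and overwrite the worst member if the offspring is better.
        parent = max(random.sample(pop, 3), key=fitness)
        child = mutate(parent)
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)

if __name__ == "__main__":
    random.seed(0)
    print("generational best lr:", generational())
    print("steady-state best lr:", steady_state())
```

The key design difference is the replacement policy: the generational variant discards all current individuals at once, while the steady-state variant changes the population incrementally, which is the sense in which it is closer to overlapping generations in biological populations.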