On the potential of evolution strategies for neural network weight optimization

Artificial neural networks are typically trained with backpropagation-based methods for weight optimization. In this paper, we investigate the potential of applying evolution strategies (ESs) to the weight optimization task. Three commonly used ESs are tested on a multilayer feedforward network trained on the well-known MNIST data set. Their performance is compared to that of the Adam algorithm: although the (1 + 1)-ES exhibits a higher convergence rate in the early stage of training, it quickly stagnates, so Adam still outperforms the ESs in the final stage of training.
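To illustrate the algorithm class studied here, the following is a minimal sketch of a (1 + 1)-ES with the classic 1/5th success rule for step-size adaptation. It minimizes a toy quadratic objective standing in for a network's training loss; the function names, hyperparameters, and objective are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, max_iter=3000, seed=0):
    """Minimize f with a (1 + 1)-ES using a per-iteration 1/5th success rule.

    Hypothetical sketch: a single parent is mutated with isotropic
    Gaussian noise; the offspring replaces the parent if it is not worse.
    """
    rng = np.random.default_rng(seed)
    x, sigma = np.asarray(x0, dtype=float), sigma0
    fx = f(x)
    for _ in range(max_iter):
        # Mutate the single parent with isotropic Gaussian noise.
        y = x + sigma * rng.standard_normal(x.shape)
        fy = f(y)
        if fy <= fx:                    # success: offspring replaces parent
            x, fx = y, fy
            sigma *= np.exp(1 / 5)      # enlarge step size on success
        else:
            sigma *= np.exp(-1 / 20)    # shrink on failure (balances at 1/5)
    return x, fx

# Toy stand-in objective: the sphere function.
def sphere(x):
    return float(np.dot(x, x))

x_best, f_best = one_plus_one_es(sphere, x0=np.ones(10))
```

In a weight-optimization setting, `x` would be the flattened weight vector of the network and `f` the training loss; the rapid early progress and later stagnation reported in the abstract are characteristic of this simple isotropic mutation scheme.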