A new learning scheme for dynamic self-adaptation of learning-relevant parameters

Because backpropagation is a gradient-descent method, the learning rate must be kept relatively small, so only a very low convergence rate can be achieved. This article shows that the relevant learning parameters can be controlled dynamically by an evolution strategy based on mutation and selection. The resulting adjustments yield significant improvements in convergence speed.
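The mutation-and-selection idea can be sketched as follows. This is a minimal illustration, not the article's exact scheme: it assumes that at each gradient step two offspring learning rates are generated by mutation (one scaled up, one scaled down by a factor `zeta`), and selection keeps whichever rate produces the lower error. The quadratic loss, the mutation factor, and the initial rate are all hypothetical stand-ins for a network's error surface and parameters.

```python
import numpy as np

# Hypothetical stand-in for a network's error surface: an ill-conditioned
# quadratic, which forces plain gradient descent to use a small fixed rate.
A = np.diag([1.0, 100.0])

def loss(w):
    return 0.5 * w @ A @ w

def grad(w):
    return A @ w

w = np.array([1.0, 1.0])
eta, zeta = 0.001, 1.8  # assumed initial learning rate and mutation factor

for step in range(200):
    g = grad(w)
    # Mutation: two offspring learning rates, one larger, one smaller.
    candidates = [eta * zeta, eta / zeta]
    # Selection: keep the rate whose trial step yields the lower error.
    trial_losses = [loss(w - e * g) for e in candidates]
    eta = candidates[int(np.argmin(trial_losses))]
    w = w - eta * g
```

Because the rate grows geometrically while growth keeps paying off and shrinks as soon as it overshoots, the scheme tracks a near-optimal step size without any hand tuning, which is the source of the convergence-speed improvement the abstract claims.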