A new learning scheme for dynamic self-adaptation of learning-relevant parameters
Because backpropagation is a gradient-descent method, the learning rate must be kept relatively small, which limits training to a very low convergence rate. This article shows that the relevant learning parameters can be controlled dynamically during training by an evolutionary strategy based on mutation and selection. The resulting adjustments yield significant improvements in convergence speed.
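The following is a minimal sketch of the general idea, not the paper's exact algorithm: a (1+1)-style mutation/selection loop wrapped around plain gradient descent, where the learning rate is mutated each epoch and the mutation is kept only if it reduces the loss. The toy problem, variable names, and the log-normal mutation factor are illustrative assumptions.

```python
# Sketch of dynamic self-adaptation of the learning rate via mutation and
# selection (assumed scheme for illustration, not the paper's exact method).
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: find w minimizing ||X w - y||^2.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

def loss(w):
    r = X @ w - y
    return float(r @ r) / len(y)

def grad(w):
    return 2.0 * X.T @ (X @ w - y) / len(y)

w = np.zeros(5)
lr = 1e-3                                  # initial learning rate
for epoch in range(200):
    base_loss = loss(w)
    # Mutation: perturb the learning rate by a log-normal factor.
    trial_lr = lr * np.exp(0.2 * rng.normal())
    trial_w = w - trial_lr * grad(w)
    # Selection: keep the mutated learning rate only if the trial step
    # reduced the loss; otherwise take a step with the unchanged rate.
    if loss(trial_w) < base_loss:
        lr, w = trial_lr, trial_w
    else:
        w = w - lr * grad(w)

print(f"final loss {loss(w):.3e}, adapted learning rate {lr:.3e}")
```

Because the learning rate is selected on observed loss improvement rather than fixed in advance, it can grow large on smooth regions of the error surface and shrink again when the step overshoots, which is the mechanism behind the reported speed-up.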