Learning in neural networks by using tangent planes to constraint surfaces
Abstract The principal disadvantage of the back-propagation gradient-descent learning algorithm for multilayer feedforward neural networks is its relatively slow rate of convergence. An alternative method, which adjusts the weights by moving to the tangent planes of constraint surfaces in weight space, is shown to give significantly faster convergence whilst preserving the back-propagation of errors through the network.
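To make the idea concrete, the following is a minimal sketch (Python with NumPy) of a tangent-plane style update for a small one-hidden-layer network, under the assumption that the constraint surface for a training pattern is the set of weight vectors producing the target output exactly, and that the step is the orthogonal projection of the current weights onto the tangent plane of that surface at the current point. The names (forward, tangent_plane_step) and the XOR task are illustrative assumptions, not taken from the paper, whose exact update rule may differ in detail.

import numpy as np

# Sketch (assumed, not the paper's exact rule): for one pattern (x, t) the
# constraint surface is {w : f(w, x) = t}.  Linearising f about the current
# weights, the orthogonal projection onto its tangent plane is
#     w <- w - (f(w, x) - t) * grad_w f / ||grad_w f||^2,
# a normalised, Kaczmarz-like step that zeroes the linearised error.

def forward(w1, w2, x):
    h = np.tanh(w1 @ x)      # hidden activations
    return w2 @ h, h         # scalar output and hidden layer

def tangent_plane_step(w1, w2, x, t, eps=1e-12):
    y, h = forward(w1, w2, x)
    e = y - t                                  # signed output error
    g2 = h                                     # dy/dw2
    g1 = np.outer(w2 * (1.0 - h ** 2), x)      # dy/dw1 by the chain rule
    norm_sq = g2 @ g2 + np.sum(g1 * g1) + eps  # squared gradient norm
    w1 = w1 - (e / norm_sq) * g1               # project onto tangent plane
    w2 = w2 - (e / norm_sq) * g2
    return w1, w2

rng = np.random.default_rng(0)
w1 = rng.normal(scale=0.5, size=(4, 3))   # hidden weights (bias as 3rd input)
w2 = rng.normal(scale=0.5, size=4)        # output weights

# XOR, with a constant bias input appended to each pattern.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
T = np.array([-1., 1., 1., -1.])
for _ in range(2000):
    for x, t in zip(X, T):
        w1, w2 = tangent_plane_step(w1, w2, x, t)

print([round(float(forward(w1, w2, x)[0]), 2) for x in X])

Note that the step differs from plain gradient descent only by the normalisation 1/||grad||^2: it is this scaling that makes each move land on the tangent plane rather than travel a fixed learning-rate distance along the gradient, while the gradient itself is still obtained by back-propagating the output error through the network.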