Distributed Asynchronous Incremental Subgradient Methods
[1] Paul Tseng, et al. An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule, 1998, SIAM J. Optim.
[2] Zhi-Quan Luo, et al. Analysis of an approximate gradient projection method with applications to the backpropagation algorithm, 1994.
[3] Zhi-Quan Luo, et al. On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks, 1991, Neural Computation.
[4] John N. Tsitsiklis, et al. Distributed Asynchronous Deterministic and Stochastic Gradient Optimization Algorithms, 1984, 1984 American Control Conference.
[5] John N. Tsitsiklis, et al. Neuro-Dynamic Programming, 1996, Encyclopedia of Machine Learning.
[6] Dimitri P. Bertsekas, et al. A New Class of Incremental Gradient Methods for Least Squares Problems, 1997, SIAM J. Optim.
[7] Luigi Grippo, et al. A class of unconstrained minimization methods for neural network training, 1994.
[8] M. Solodov, et al. Error Stability Properties of Generalized Gradient-Type Algorithms, 1998.
[9] K. Kiwiel, et al. Parallel Subgradient Methods for Convex Optimization, 2001.
[10] M. Caramanis, et al. Efficient Lagrangian relaxation algorithms for industry size job-shop scheduling problems, 1998.
[11] X. Zhao, et al. Surrogate Gradient Algorithm for Lagrangian Relaxation, 1999.
[12] John N. Tsitsiklis, et al. Parallel and distributed computation, 1989.
[13] O. Mangasarian, et al. Serial and parallel backpropagation convergence via nonmonotone perturbed minimization, 1994.