Convergence Rate of Incremental Gradient and Incremental Newton Methods
Asuman E. Ozdaglar | Pablo A. Parrilo | Mert Gürbüzbalaban