Tianbao Yang | Yi Xu | Qihang Lin