Training L1-Regularized Models with Orthant-Wise Passive Descent Algorithms
[1] S. V. N. Vishwanathan, et al. A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning, 2008, J. Mach. Learn. Res.
[2] Francis Bach, et al. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives, 2014, NIPS.
[3] Lin Xiao, et al. A Proximal Stochastic Gradient Method with Progressive Variance Reduction, 2014, SIAM J. Optim.
[4] Michael I. Jordan, et al. A Linearly-Convergent Stochastic L-BFGS Algorithm, 2015, AISTATS.
[5] Chih-Jen Lin, et al. LIBSVM: A library for support vector machines, 2011, TIST.
[6] Tong Zhang, et al. Solving large scale linear prediction problems using stochastic gradient descent algorithms, 2004, ICML.
[7] Toru Maruyama. On some developments in Convex Analysis, 1977.
[8] Léon Bottou, et al. Large-Scale Machine Learning with Stochastic Gradient Descent, 2010, COMPSTAT.
[9] Terence Tao, et al. The Dantzig selector: Statistical estimation when p is much larger than n, 2005, arXiv:math/0506081.
[10] Robert M. Gower, et al. Stochastic Block BFGS: Squeezing More Curvature out of Data, 2016, ICML.
[11] D. K. Smith, et al. Numerical Optimization, 2001, J. Oper. Res. Soc.
[12] Stephen P. Boyd, et al. Proximal Algorithms, 2013, Found. Trends Optim.
[13] Sara van de Geer, et al. Statistics for High-Dimensional Data: Methods, Theory and Applications, 2011.
[14] Jieping Ye, et al. A Modified Orthant-Wise Limited Memory Quasi-Newton Method with Convergence Analysis, 2015, ICML.
[15] Jieping Ye, et al. HONOR: Hybrid Optimization for NOn-convex Regularized problems, 2015, NIPS.
[17] Jacek Gondzio, et al. Action constrained quasi-Newton methods, 2014, arXiv.
[18] Zhihua Zhang, et al. Variance-Reduced Second-Order Methods, 2016, arXiv.
[19] Alexander J. Smola, et al. Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization, 2016, NIPS.
[20] Jorge Nocedal, et al. On the limited memory BFGS method for large scale optimization, 1989, Math. Program.
[21] Jorge Nocedal, et al. A Stochastic Quasi-Newton Method for Large-Scale Optimization, 2014, SIAM J. Optim.
[22] R. Tibshirani, et al. Least angle regression, 2004, arXiv:math/0406456.
[23] Ohad Shamir, et al. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes, 2012, ICML.
[24] Serge Gratton, et al. On A Class of Limited Memory Preconditioners For Large Scale Linear Systems With Multiple Right-Hand Sides, 2011, SIAM J. Optim.
[25] Aryan Mokhtari, et al. RES: Regularized Stochastic BFGS Algorithm, 2014, IEEE Transactions on Signal Processing.
[26] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[27] Tong Zhang, et al. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization, 2013, Mathematical Programming.
[28] Tong Zhang, et al. Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations, 2011, IEEE Transactions on Information Theory.
[29] Yurii Nesterov, et al. Introductory Lectures on Convex Optimization: A Basic Course, 2014, Applied Optimization.
[30] Simon Günter, et al. A Stochastic Quasi-Newton Method for Online Convex Optimization, 2007, AISTATS.
[31] Julien Mairal, et al. Optimization with Sparsity-Inducing Penalties, 2011, Found. Trends Mach. Learn.
[32] Zhihua Zhang, et al. A Proximal Stochastic Quasi-Newton Algorithm, 2016, arXiv:1602.00223.
[33] Stephen J. Wright, et al. Numerical Optimization, 2006, Springer.
[34] Jianfeng Gao, et al. Scalable training of L1-regularized log-linear models, 2007, ICML '07.
[35] Shiqian Ma, et al. Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization, 2014, SIAM J. Optim.