[1] S. Kakade, et al. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization, 2009.
[2] Ameet Talwalkar,et al. MLlib: Machine Learning in Apache Spark , 2015, J. Mach. Learn. Res..
[3] Ilya Trofimov,et al. Distributed Coordinate Descent for L1-regularized Logistic Regression , 2015, AIST.
[4] S. V. N. Vishwanathan,et al. A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning , 2008, J. Mach. Learn. Res..
[5] Toru Maruyama. A few recent developments in Convex Analysis [in Japanese], 1977.
[6] Tyler B. Johnson,et al. Blitz: A Principled Meta-Algorithm for Scaling Sparse Optimization , 2015, ICML.
[7] Shai Shalev-Shwartz,et al. Stochastic dual coordinate ascent methods for regularized loss , 2012, J. Mach. Learn. Res..
[8] Shou-De Lin,et al. A Dual Augmented Block Minimization Framework for Learning with Limited Memory , 2015, NIPS.
[9] Virginia Smith,et al. Distributed Optimization for Non-Strongly Convex Regularizers , 2016 .
[10] Ambuj Tewari,et al. Stochastic methods for l1 regularized loss minimization , 2009, ICML '09.
[11] Chih-Jen Lin,et al. A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification , 2010, J. Mach. Learn. Res..
[12] Trevor Hastie, et al. Regularization Paths for Generalized Linear Models via Coordinate Descent, 2010, Journal of Statistical Software.
[13] Lin Xiao,et al. On the complexity analysis of randomized block-coordinate descent methods , 2013, Mathematical Programming.
[14] Peter Richtárik,et al. SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization , 2015, ICML.
[15] Dongyeop Kang,et al. Data/Feature Distributed Stochastic Coordinate Descent for Logistic Regression , 2014, CIKM.
[16] Tong Zhang,et al. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization , 2013, Mathematical Programming.
[17] Tianbao Yang,et al. Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent , 2013, NIPS.
[18] S. Sundararajan, et al. A distributed block coordinate descent method for training l1-regularized linear classifiers, 2014, J. Mach. Learn. Res..
[19] Martin Jaggi,et al. An Equivalence between the Lasso and Support Vector Machines , 2013, ArXiv.
[20] Stephen P. Boyd,et al. Convex Optimization , 2004, Algorithms and Theory of Computation Handbook.
[21] Jianfeng Gao,et al. Scalable training of L1-regularized log-linear models , 2007, ICML '07.
[22] Chia-Hua Ho,et al. An improved GLMNET for l1-regularized logistic regression , 2011, J. Mach. Learn. Res..
[23] Ion Necoara,et al. Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds , 2016, SIAM J. Optim..
[24] Yuchen Zhang,et al. Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization , 2014, ICML.
[25] Martin Jaggi,et al. Primal-Dual Rates and Certificates , 2016, ICML.
[26] Yurii Nesterov,et al. Smooth minimization of non-smooth functions , 2005, Math. Program..
[27] Stephen P. Boyd,et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers , 2011, Found. Trends Mach. Learn..
[28] Heinz H. Bauschke,et al. Convex Analysis and Monotone Operator Theory in Hilbert Spaces , 2011, CMS Books in Mathematics.
[29] Joseph K. Bradley,et al. Parallel Coordinate Descent for L1-Regularized Loss Minimization , 2011, ICML.
[30] Michael I. Jordan,et al. Adding vs. Averaging in Distributed Primal-Dual Optimization , 2015, ICML.
[31] Tong Zhang,et al. A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization , 2016, J. Mach. Learn. Res..
[32] Martin Wattenberg,et al. Ad click prediction: a view from the trenches , 2013, KDD.
[33] Simone Forte. Distributed Optimization for Non-Strongly Convex Regularizers , 2015 .
[34] Thomas Hofmann,et al. Communication-Efficient Distributed Dual Coordinate Ascent , 2014, NIPS.
[35] Peter Richtárik,et al. Accelerated, Parallel, and Proximal Coordinate Descent , 2013, SIAM J. Optim..