[1] Chih-Jen Lin, et al. LIBSVM: A library for support vector machines, 2011, TIST.
[2] Michael I. Jordan, et al. L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework, 2015, ArXiv.
[3] Anthony T. Chronopoulos, et al. Parallel Iterative S-Step Methods for Unsymmetric Linear Systems, 1996, Parallel Comput.
[4] Michael I. Jordan, et al. CoCoA: A General Framework for Communication-Efficient Distributed Optimization, 2016, J. Mach. Learn. Res.
[5] Jack Dongarra, et al. Applied Mathematics Research for Exascale Computing, 2014.
[6] Guillermo Sapiro, et al. Online dictionary learning for sparse coding, 2009, ICML '09.
[7] Emmanuel J. Candès, et al. Templates for convex cone problems with applications to sparse signal recovery, 2010, Math. Program. Comput.
[8] Mark W. Schmidt, et al. Learning Graphical Model Structure Using L1-Regularization Paths, 2007, AAAI.
[9] Marc Teboulle, et al. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, 2009, SIAM J. Imaging Sci.
[10] I. Daubechies, et al. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, 2003, math/0307152.
[11] Stephen J. Wright, et al. Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, 2011, NIPS.
[12] Michael A. Saunders, et al. Proximal Newton-type methods for convex optimization, 2012, NIPS.
[13] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[14] Lin Xiao, et al. A Proximal Stochastic Gradient Method with Progressive Variance Reduction, 2014, SIAM J. Optim.
[15] Samuel H. Fuller, et al. Computing Performance: Game Over or Next Level?, 2011, Computer.
[16] Jorge Nocedal, et al. Optimization Methods for Large-Scale Machine Learning, 2016, SIAM Rev.
[17] Le Song, et al. CA-SVM: Communication-Avoiding Support Vector Machines on Distributed Systems, 2015, 2015 IEEE International Parallel and Distributed Processing Symposium.
[18] Ramesh Subramonian, et al. LogP: towards a realistic model of parallel computation, 1993, PPOPP '93.
[19] Chris J. Scheiman, et al. LogGP: incorporating long messages into the LogP model—one step closer towards a realistic model for parallel computation, 1995, SPAA '95.
[20] Nancy Wilkins-Diehr, et al. XSEDE: Accelerating Scientific Discovery, 2014, Computing in Science & Engineering.
[21] Fan Zhou, et al. On the convergence properties of a K-step averaging stochastic gradient descent algorithm for nonconvex optimization, 2017, IJCAI.
[22] James Demmel, et al. Avoiding communication in primal and dual block coordinate descent methods, 2016, SIAM J. Sci. Comput.
[23] Zheng Chen, et al. P-packSVM: Parallel Primal grAdient desCent Kernel SVM, 2009, 2009 Ninth IEEE International Conference on Data Mining.
[24] Bor-Yiing Su, et al. Robust Large-Scale Machine Learning in the Cloud, 2016, KDD.
[25] Stephen P. Boyd, et al. An Interior-Point Method for Large-Scale l1-Regularized Logistic Regression, 2007, J. Mach. Learn. Res.
[26] Anthony T. Chronopoulos, et al. On the efficient implementation of preconditioned s-step conjugate gradient methods on multiprocessors with memory hierarchy, 1989, Parallel Comput.
[27] Atsushi Nitanda, et al. Stochastic Proximal Gradient Descent with Acceleration Techniques, 2014, NIPS.
[28] James Demmel, et al. Communication lower bounds and optimal algorithms for numerical linear algebra, 2014, Acta Numerica.
[29] Stephen P. Boyd, et al. An Interior-Point Method for Large-Scale l1-Regularized Logistic Regression, 2007, J. Mach. Learn. Res.
[30] Erin Carson. Communication-Avoiding Krylov Subspace Methods in Theory and Practice, 2015.
[31] James Demmel, et al. Avoiding Communication in Nonsymmetric Lanczos-Based Krylov Subspace Methods, 2013, SIAM J. Sci. Comput.