Distributed Trust-Region Method With First Order Models
[1] Na Li, et al. Harnessing smoothness to accelerate distributed optimization, 2016, 2016 IEEE 55th Conference on Decision and Control (CDC).
[2] Qing Ling, et al. On the Convergence of Decentralized Gradient Descent, 2013, SIAM J. Optim.
[3] Dusan Jakovetic, et al. A Unification, Generalization, and Acceleration of Exact Distributed First Order Methods, 2017, ArXiv.
[4] Qing Ling, et al. Decentralized bundle method for nonsmooth consensus optimization, 2017, 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP).
[5] Asuman E. Ozdaglar, et al. Distributed Subgradient Methods for Multi-Agent Optimization, 2009, IEEE Transactions on Automatic Control.
[6] Natasa Krejic, et al. Distributed second order methods with variable number of working nodes, 2017, ArXiv.
[7] Aryan Mokhtari, et al. Network Newton-Part II: Convergence Rate and Implementation, 2015, arXiv:1504.06020.
[8] Katya Scheinberg, et al. Introduction to Derivative-Free Optimization, 2009, MPS-SIAM Series on Optimization.
[9] José M. F. Moura, et al. Fast Distributed Gradient Methods, 2011, IEEE Transactions on Automatic Control.
[10] Stephen J. Wright, et al. Numerical Optimization, 2006, Springer Series in Operations Research and Financial Engineering.
[11] Rui Shi, et al. A Stochastic Trust Region Algorithm Based on Careful Step Normalization, 2017, INFORMS J. Optim.
[12] Nicholas I. M. Gould, et al. Trust Region Methods, 2000, MOS-SIAM Series on Optimization.
[13] John Langford, et al. Scaling up machine learning: parallel and distributed approaches, 2011, KDD '11 Tutorials.
[14] Volkan Cevher, et al. Convex Optimization for Big Data: Scalable, randomized, and parallel algorithms for big data analytics, 2014, IEEE Signal Processing Magazine.
[15] Michael G. Rabbat, et al. Consensus-based distributed optimization: Practical issues and applications in large-scale machine learning, 2012, 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton).