Distributed Trust-Region Method With First-Order Models

In this paper we introduce the trust-region concept to distributed optimization. A large class of globally convergent methods of this type is used efficiently in centralized optimization, both constrained and unconstrained. Methods of this class are built on the idea of modeling the objective function at each iteration and taking the new iterate as the minimizer of the model over a certain region, called the trust region. The trust-region size, the minimization method, and the model function depend on the properties of the objective function. We propose a general framework and concentrate on first-order methods, i.e., gradient methods. Because the trust-region mechanism itself generates the step size, the result is a fully distributed method with node-varying step sizes. Numerical results presented in the paper demonstrate the efficiency of the proposed approach.
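
To make the mechanism concrete, the sketch below shows how a first-order (linear) model yields a trust-region-controlled gradient step: minimizing m(x + s) = f(x) + g^T s over the ball ||s|| <= delta gives s = -delta * g / ||g||, so the radius delta plays the role of the step size and is adapted by the usual actual-versus-predicted decrease test. This is only an illustration, not the algorithm from the paper: the acceptance test on each node's local objective f_i, the constants eta/shrink/grow, and the mix-then-step structure (borrowed from decentralized gradient descent) are assumptions made for the example.

```python
import numpy as np

def tr_gradient_step(f, grad, x, delta, eta=0.1, shrink=0.5, grow=2.0):
    """One trust-region step with a linear model (illustrative constants).

    The linear model m(x + s) = f(x) + g^T s, minimized over ||s|| <= delta,
    gives the step s = -delta * g / ||g||, with predicted decrease
    m(x) - m(x + s) = delta * ||g||.
    """
    g = grad(x)
    g_norm = np.linalg.norm(g)
    if g_norm == 0.0:                       # stationary point: keep iterate
        return x, delta
    s = -delta * g / g_norm                 # minimizer of the linear model
    predicted = delta * g_norm              # predicted decrease of the model
    rho = (f(x) - f(x + s)) / predicted     # actual vs. predicted decrease
    if rho >= eta:                          # successful step: accept, grow radius
        return x + s, grow * delta
    return x, shrink * delta                # unsuccessful: reject, shrink radius

def distributed_tr(fs, grads, W, x0, delta0=1.0, iters=100):
    """Hypothetical distributed loop: each node i mixes with neighbors via a
    doubly stochastic weight matrix W, then takes a trust-region-controlled
    gradient step on its local f_i, keeping its own radius delta_i (hence
    node-varying step sizes)."""
    n = len(fs)
    x = [x0.copy() for _ in range(n)]
    delta = [delta0] * n
    for _ in range(iters):
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            x[i], delta[i] = tr_gradient_step(fs[i], grads[i],
                                              mixed[i], delta[i])
    return x

# Toy usage: quadratic local objectives f_i(x) = 0.5 * ||x - b_i||^2 on a
# complete graph with uniform weights; the consensus optimum is the mean of b_i.
rng = np.random.default_rng(0)
b = [rng.standard_normal(3) for _ in range(4)]
fs = [lambda x, bi=bi: 0.5 * np.sum((x - bi) ** 2) for bi in b]
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.full((4, 4), 0.25)
sol = distributed_tr(fs, grads, W, np.zeros(3))
```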
