Convergence Analysis of Nonconvex Distributed Stochastic Zeroth-order Coordinate Method

This paper investigates the stochastic distributed nonconvex optimization problem of minimizing a global cost function formed by the summation of n local cost functions. We solve this problem using zeroth-order (ZO) information exchange and propose a ZO distributed primal–dual coordinate method (ZODIAC). Each agent approximates its local gradient by querying its own stochastic ZO oracle along the coordinate directions with an adaptive smoothing parameter. We show that the proposed algorithm achieves a convergence rate of O(√p/√T) for general nonconvex cost functions, where p is the problem dimension and T is the number of iterations. We demonstrate the efficiency of the proposed algorithm through a numerical example, in comparison with existing state-of-the-art centralized and distributed ZO algorithms.
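The coordinate-wise ZO approximation mentioned above can be illustrated with a minimal sketch. This is not the paper's ZODIAC algorithm: it is a deterministic forward-difference estimator along unit coordinate directions (the paper's version queries a stochastic oracle and adapts the smoothing parameter); the function name and the fixed smoothing parameter `mu` are illustrative.

```python
import numpy as np

def zo_coordinate_gradient(f, x, mu=1e-4):
    """Coordinate-wise zeroth-order gradient estimate.

    Approximates each partial derivative of f at x by a forward
    difference along the corresponding unit vector e_i, using a
    smoothing parameter mu:
        g_i = (f(x + mu * e_i) - f(x)) / mu
    """
    p = x.size
    g = np.zeros(p)
    fx = f(x)  # one baseline query, reused for every coordinate
    for i in range(p):
        e = np.zeros(p)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - fx) / mu
    return g

# Example: for f(x) = ||x||^2 the true gradient at x is 2x.
g = zo_coordinate_gradient(lambda x: float(x @ x), np.array([1.0, 2.0]))
```

The estimator uses p + 1 function queries per gradient, which is where the √p factor in the convergence rate of ZO coordinate methods typically originates.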
