Online distributed optimization via dual averaging

This paper presents a regret analysis for a distributed online optimization problem solved over a network of agents. The goal is to optimize, in a distributed fashion, a global objective function that decomposes into the sum of convex cost functions, one associated with each agent. Because the agents face uncertainty in their environment, their cost functions change at each time step. We extend a distributed algorithm based on dual subgradient averaging to this online setting. The regret of an algorithm is the difference between the cumulative cost of the sequence of decisions it generates and the cost of the best fixed decision in hindsight; the proposed algorithm admits an upper bound on this regret expressed as a function of the underlying network topology, specifically its connectivity. Finally, a model for distributed sensor estimation is proposed and the corresponding simulation results are presented.
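To make the performance measure precise, the per-agent regret after T rounds can be written as follows; the notation (f_j^t for agent j's cost at round t, x_i(t) for agent i's decision, n agents, and a common constraint set X) is an illustrative convention rather than the paper's exact formulation:

\[
R_T(i) \;=\; \sum_{t=1}^{T} \sum_{j=1}^{n} f_j^{t}\bigl(x_i(t)\bigr) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} \sum_{j=1}^{n} f_j^{t}(x).
\]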

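As a rough illustration of the algorithmic idea, the Python sketch below implements an online variant of distributed dual averaging. It assumes a doubly stochastic matrix P consistent with the communication graph, the proximal function psi(x) = ||x||^2 / 2, a Euclidean-ball constraint set, and a step size alpha_t proportional to 1/sqrt(t); these choices and the function names are assumptions made for the example, not the paper's exact specification.

import numpy as np

def online_distributed_dual_averaging(P, subgradient, T, d, radius=1.0):
    # P           : (n, n) doubly stochastic matrix matching the network topology (assumed)
    # subgradient : callable (t, i, x) -> subgradient of agent i's round-t cost at x (assumed)
    # T           : number of rounds
    # d           : dimension of the decision variable
    # radius      : radius of the Euclidean-ball constraint set (assumed)
    n = P.shape[0]
    z = np.zeros((n, d))              # accumulated (dual) subgradients, one row per agent
    x = np.zeros((n, d))              # primal decisions, initialized at the origin
    decisions = []
    for t in range(1, T + 1):
        g = np.array([subgradient(t, i, x[i]) for i in range(n)])
        z = P @ z + g                 # consensus step on dual variables plus new subgradients
        alpha = 1.0 / np.sqrt(t)      # decaying step size
        x = -alpha * z                # proximal step with psi(x) = ||x||^2 / 2
        norms = np.linalg.norm(x, axis=1, keepdims=True)
        scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
        x = x * scale                 # project each decision back onto the ball
        decisions.append(x.copy())
    return decisions

The consensus step P @ z mixes the agents' accumulated subgradients across the network, so each agent's decision gradually reflects every agent's costs; how quickly this mixing happens is what ties the regret bound to the connectivity of the graph.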