Time-Varying Optimization via Inexact Proximal Online Gradient Descent

We consider the minimization of a time-varying function that comprises a differentiable and a non-differentiable component. Such functions arise in learning and estimation problems, where the loss is often differentiable and strongly convex, while the regularizer and the constraints translate into a non-differentiable penalty. A dynamic version of the proximal online gradient descent algorithm is designed that can handle errors in the gradient. The performance of the proposed algorithm is analyzed within the online convex optimization framework, and bounds on the dynamic regret are developed. These bounds generalize existing results on non-differentiable minimization. The inexact-gradient results are further extended to online algorithms for large-scale problems where the full gradient cannot be computed at every iteration; instead, we put forth an online proximal stochastic variance-reduced gradient descent algorithm that works with sampled data. Tests on a robot formation control problem demonstrate the efficacy of the proposed algorithms.
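To make the setting concrete, the sketch below illustrates an inexact proximal online gradient descent loop of the kind the abstract describes: at each round, a gradient step on the smooth loss is followed by a proximal step on the non-differentiable penalty, with an additive perturbation modeling gradient errors. It is a minimal illustration only, assuming an l1 regularizer (so the proximal operator is soft-thresholding); the function names, the error model, and the toy tracking example are assumptions for exposition, not taken from the paper.

```python
# Minimal sketch of inexact proximal online gradient descent (assumptions noted above).
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_proximal_ogd(x0, grad_f, lam, alpha, T, grad_error=0.0, rng=None):
    """Run T rounds of proximal OGD with (optionally) erroneous gradients.

    grad_f(t, x) returns the gradient of the smooth loss f_t at x;
    grad_error bounds the norm of an additive perturbation that models
    inexact gradient evaluations (hypothetical error model).
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t in range(T):
        g = grad_f(t, x)
        if grad_error > 0:                      # inject a bounded gradient error
            e = rng.normal(size=x.shape)
            g = g + grad_error * e / np.linalg.norm(e)
        # gradient step on f_t, then prox step on lam * ||x||_1
        x = soft_threshold(x - alpha * g, alpha * lam)
        iterates.append(x.copy())
    return iterates

# Toy example: track a slowly moving target b_t with f_t(x) = 0.5 * ||x - b_t||^2.
b = lambda t: np.array([np.cos(0.01 * t), np.sin(0.01 * t)])
traj = inexact_proximal_ogd(np.zeros(2), lambda t, x: x - b(t),
                            lam=0.1, alpha=0.5, T=200, grad_error=0.05)
```

Under this kind of setup, the dynamic regret analysis would compare the iterates against the per-round minimizers of the time-varying composite objective, with the gradient-error term appearing as an additive contribution to the bound.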
