Optimistic Dynamic Regret Bounds

Online Learning (OL) algorithms were originally developed to guarantee good performance when their output is compared to the best fixed strategy. Performance with respect to dynamic strategies remains an active research topic. In this work we develop dynamic adaptations of classical OL algorithms based on experts' advice and the notion of optimism. We also propose a constructive method to generate this advice, and finally provide both theoretical and experimental guarantees for our procedures.
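To make the notion of optimism concrete, here is a minimal sketch of optimistic online gradient descent in the spirit of predictable sequences (Rakhlin and Sridharan, 2013): the learner plays using a hint for the next gradient (here, simply the previous gradient) and tracks dynamic regret against a drifting comparator. The toy loss sequence and all parameter choices are illustrative assumptions, not the paper's actual procedure.

```python
# Sketch of optimistic online gradient descent with a gradient hint
# (an assumption for illustration; not the paper's procedure).

def clip(x, lo=0.0, hi=1.0):
    """Project onto the decision set [lo, hi]."""
    return max(lo, min(hi, x))

def optimistic_ogd(T=200, eta=0.1):
    y = 0.0       # secondary iterate, updated with observed gradients
    hint = 0.0    # optimistic guess of the next gradient (here: last gradient)
    dynamic_regret = 0.0
    for t in range(1, T + 1):
        theta = t / T                       # slowly drifting minimizer of f_t(x) = (x - theta)^2
        x = clip(y - eta * hint)            # play using the hint
        g = 2.0 * (x - theta)               # observed gradient of f_t at x
        dynamic_regret += (x - theta) ** 2  # f_t(x) - f_t(theta), since f_t(theta) = 0
        y = clip(y - eta * g)               # standard gradient update
        hint = g                            # next round's hint
    return dynamic_regret

print(optimistic_ogd())
```

When the comparator drifts slowly and the hints are accurate, the accumulated dynamic regret stays far below the horizon T, which is the behavior the optimistic analyses aim to quantify.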
