Lipschitz and Comparator-Norm Adaptivity in Online Learning

We study Online Convex Optimization in the unbounded setting where neither predictions nor gradients are constrained. The goal is to simultaneously adapt to both the sequence of gradients and the comparator. We first develop parameter-free and scale-free algorithms for a simplified setting with hints. We present two versions: the first adapts to the squared norms of both comparator and gradients separately using $O(d)$ time per round, while the second adapts to their squared inner products (which measure variance only in the comparator direction) in $O(d^3)$ time per round. We then generalize two prior reductions to the unbounded setting: one to remove the need for hints, and another to deal with the range ratio problem (which already arises in prior work). We discuss their optimality in light of prior and new lower bounds. We apply our methods to obtain sharper regret bounds for scale-invariant online prediction with linear models.
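To make the notion of parameter-free adaptivity concrete, the following is a minimal sketch of the Krichevsky–Trofimov coin-betting scheme that underlies many parameter-free methods (cf. Orabona and Pál's coin-betting framework). The one-dimensional setting, the function name, and the assumption that gradients are bounded by 1 (i.e., a known Lipschitz hint) are simplifications for illustration, not the algorithm of this paper, which removes exactly such assumptions.

```python
def kt_coin_betting(gradients, initial_wealth=1.0):
    """KT coin-betting learner for 1-d online linear losses g_t * x_t.

    Illustrative sketch: assumes |g_t| <= 1 (a Lipschitz hint),
    which is precisely the kind of prior knowledge the unbounded
    setting dispenses with.
    """
    wealth = initial_wealth
    grad_sum = 0.0  # running sum of negative gradients
    plays = []
    for t, g in enumerate(gradients, start=1):
        # Bet a KT fraction of current wealth: x_t = (sum_{s<t} -g_s)/t * W_{t-1}
        x = grad_sum / t * wealth
        plays.append(x)
        wealth -= g * x   # wealth update: W_t = W_{t-1} - g_t * x_t
        grad_sum -= g
    return plays
```

On a favorable sequence (all gradients equal to -1), the bets and the wealth grow without any tuned learning rate, which is the hallmark of parameter-freeness; the paper's contribution is achieving comparable guarantees without the boundedness hint and simultaneously adapting to gradient scale.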
