Lower Bounds on Individual Sequence Regret

In this work, we lower bound the individual sequence anytime regret of a large family of online algorithms. This bound depends on the quadratic variation of the sequence, $Q_T$, and on the learning rate. Nevertheless, we show that any learning rate that guarantees a regret upper bound of $O(\sqrt{Q_T})$ necessarily implies an $\Omega(\sqrt{Q_T})$ anytime regret on any sequence with quadratic variation $Q_T$. The algorithms we consider are linear forecasters whose weight vector at time $t+1$ is the gradient of a concave potential function of the cumulative losses at time $t$. We show that these algorithms include all linear Regularized Follow the Leader algorithms. We prove our result for potentials with negative definite Hessians, and for best expert setting potentials satisfying some natural regularity conditions. In the best expert setting, we give our result in terms of the translation-invariant relative quadratic variation. We apply our lower bounds to Randomized Weighted Majority and to linear-cost Online Gradient Descent. We show that bounds on anytime regret imply a lower bound on the price of "at the money" call options in an arbitrage-free market. Given a lower bound $Q$ on the quadratic variation of a stock price, we give an $\Omega(\sqrt{Q})$ lower bound on the option price, valid for $Q < 0.5$. This lower bound has the same asymptotic behavior as the Black-Scholes pricing and improves a previous $\Omega(Q)$ result given in [4].
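In symbols (our own shorthand rather than notation taken from the paper): writing $L_t = \sum_{s=1}^{t} \ell_s$ for the cumulative loss vector, a forecaster in this family plays

$$w_{t+1} = \nabla \Phi(L_t)$$

for some concave potential function $\Phi$, and "anytime" regret refers, roughly, to the largest regret incurred on any prefix of the sequence, $\max_{t \le T} R_t$.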