Improved bounds on on-line learning of smooth functions of a single variable

We consider the complexity of learning classes of smooth functions formed by bounding different norms of a function's derivative. The learning model is the generalization of the mistake-bound model to continuous-valued functions. Suppose F_q is the set of all absolutely continuous functions f from [0, 1] to R such that ∥f′∥_q ≤ 1, and opt(F_q, m) is the best possible bound on the worst-case sum of absolute prediction errors over sequences of m trials. We show that for all q ≥ 2, opt(F_q, m) = Θ(√(log m)), and that opt(F_2, m) = √(log m)/2 ± O(1).
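
For concreteness, here is one standard way to formalize the quantity opt(F_q, m) defined informally above. The inf/sup formulation below is a sketch following the usual worst-case online prediction protocol; it is an assumption about the intended model, not a statement from the source.

% One standard formalization of the worst-case online model (assumed, not taken from the source).
\[
\mathcal{F}_q \;=\; \bigl\{\, f : [0,1] \to \mathbb{R} \;\bigm|\; f \text{ absolutely continuous},\ \|f'\|_q \le 1 \,\bigr\},
\]
\[
\mathrm{opt}(\mathcal{F}_q, m) \;=\; \inf_{A}\; \sup_{f \in \mathcal{F}_q}\; \sup_{x_1, \dots, x_m \in [0,1]}\; \sum_{t=1}^{m} \bigl|\hat{y}_t - f(x_t)\bigr|,
\]
% where \hat{y}_t denotes algorithm A's prediction on trial t, made after observing
% (x_1, f(x_1)), ..., (x_{t-1}, f(x_{t-1})) and the current query point x_t.

Under this reading, each trial consists of the adversary revealing a point x_t, the learner predicting ŷ_t, and the true value f(x_t) then being revealed; the learner's loss on the trial is the absolute prediction error |ŷ_t − f(x_t)|, in analogy with a mistake in the discrete mistake-bound model.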