Open Problem: Better Bounds for Online Logistic Regression
Known algorithms applied to online logistic regression on a feasible set of L2 diameter D achieve regret bounds like O(e^D log T) in one dimension, but we show a bound of O(√D + log T) is possible in a binary 1-dimensional problem. Thus, we pose the following question: Is it possible to achieve a regret bound for online logistic regression that is O(poly(D) log T)? Even if this is not possible in general, it would be interesting to have a bound that reduces to our bound in the one-dimensional case.
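To make the setting precise, here is a minimal sketch of the standard online-convex-optimization formulation these bounds refer to (the notation below is ours, not quoted from the paper): on each round t = 1, ..., T the learner picks w_t from a convex feasible set W of L2 diameter D, an adversary reveals an example (x_t, y_t) with ||x_t||_2 <= 1 and y_t in {-1, +1}, the learner pays the logistic loss, and regret is measured against the best fixed comparator in W:

    % Per-round logistic loss and regret (notation ours, assuming ||x_t|| <= 1)
    \[
      \ell_t(w) \;=\; \log\bigl(1 + \exp(-y_t\, w \cdot x_t)\bigr),
      \qquad
      \mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \ell_t(w_t)
        \;-\; \min_{w^\ast \in W} \sum_{t=1}^{T} \ell_t(w^\ast).
    \]

Under this formulation, online gradient descent (Zinkevich, 2003) yields O(D √T) regret since the logistic loss is 1-Lipschitz when ||x_t|| <= 1, while exp-concavity-based methods such as Online Newton Step (Hazan et al., 2006) yield the O(e^D log T)-type bounds mentioned above; the open question is whether the exponential dependence on D can be made polynomial while keeping the log T rate.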