On the Generalization Ability of Online Strongly Convex Programming Algorithms
[1] D. Freedman. On Tail Probabilities for Martingales, 1975.
[2] N. Littlestone. Mistake bounds and logarithmic linear-threshold learning algorithms, 1990.
[3] Michael Collins et al. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms, 2002, EMNLP.
[4] Claudio Gentile et al. On the generalization ability of on-line learning algorithms, 2001, IEEE Transactions on Information Theory.
[5] Tong Zhang. Data Dependent Concentration Bounds for Sequential Prediction Algorithms, 2005, COLT.
[6] Adam Tauman Kalai et al. Logarithmic Regret Algorithms for Online Convex Optimization, 2006, COLT.
[7] Nathan Ratliff et al. (Online) Subgradient Methods for Structured Prediction, 2007.
[8] Yoram Singer et al. Pegasos: primal estimated sub-gradient solver for SVM, 2007, ICML '07.
[9] Elad Hazan et al. Logarithmic regret algorithms for online convex optimization, 2006, Machine Learning.
[10] Claudio Gentile et al. Improved Risk Tail Bounds for On-Line Algorithms, 2005, IEEE Transactions on Information Theory.
[11] Sham M. Kakade et al. Mind the Duality Gap: Logarithmic regret algorithms for online optimization, 2008, NIPS.
[12] Slobodan Vucetic et al. Online Passive-Aggressive Algorithms on a Budget, 2010, AISTATS.