Budgeted Prediction with Expert Advice
Deepak S. Turaga | Gerald Tesauro | Kareem Amin | Satyen Kale