Partial monitoring games are repeated games in which the learner receives feedback that may differ from the adversary's move, or even from the reward gained by the learner. Recently, a general model of combinatorial partial monitoring (CPM) games was proposed \cite{lincombinatorial2014}, in which the learner's action space can be exponentially large and the adversary samples its moves from a bounded, continuous space according to a fixed distribution. That work gave a confidence bound based algorithm (GCB) that achieves $O(T^{2/3}\log T)$ distribution-independent and $O(\log T)$ distribution-dependent regret bounds. The implementation of GCB depends on two separate offline oracles, and its distribution-dependent regret bound additionally requires the existence of a unique optimal action for the learner. Adopting their CPM model, our first contribution is a Phased Exploration with Greedy Exploitation (PEGE) algorithmic framework for the problem. Different algorithms within the framework achieve $O(T^{2/3}\sqrt{\log T})$ distribution-independent and $O(\log^2 T)$ distribution-dependent regret, respectively. Crucially, our framework needs only the simpler "argmax" oracle from GCB, and its distribution-dependent regret bound does not require the existence of a unique optimal action. Our second contribution is another algorithm, PEGE2, which combines gap estimation with a PEGE algorithm to achieve an $O(\log T)$ regret bound, matching the GCB guarantee while removing the dependence on the size of the learner's action space. However, like GCB, PEGE2 requires access to both offline oracles and the existence of a unique optimal action. Finally, we discuss how our algorithms can be efficiently applied to a CPM problem of practical interest: namely, online ranking with feedback at the top.
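To make the phased exploration/exploitation idea concrete, here is a minimal Python sketch of a PEGE-style loop. It is illustrative only: it assumes a finite action set with direct stochastic reward feedback (a multi-armed-bandit simplification, not the linear feedback of the CPM model), and the names `pege` and `pull`, the phase count, and the doubling exploitation schedule are all hypothetical choices, not the paper's actual algorithm.

```python
def pege(actions, pull, num_phases=8, explore_reps=1):
    """PEGE-style loop (illustrative sketch, not the paper's algorithm).

    In phase m: play every action `explore_reps` times to refine reward
    estimates, then greedily exploit the empirical best action for 2**m
    rounds. `pull(a)` returns a stochastic reward for action a.
    Returns the full (action, reward) history.
    """
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    history = []
    for m in range(num_phases):
        # Exploration: sample each action to improve its mean estimate.
        for a in actions:
            for _ in range(explore_reps):
                r = pull(a)
                totals[a] += r
                counts[a] += 1
                history.append((a, r))
        # Greedy exploitation: commit to the empirical best for 2**m rounds.
        best = max(actions, key=lambda a: totals[a] / counts[a])
        for _ in range(2 ** m):
            history.append((best, pull(best)))
    return history
```

The growing exploitation phases are what keep the fraction of exploration rounds small as $T$ grows; the framework in the paper instead tunes phase lengths to obtain the stated regret bounds, and its exploitation step calls the offline "argmax" oracle over the (possibly exponentially large) action space.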
[1] H. Robbins. Some aspects of the sequential design of experiments. 1952.
[2] John N. Tsitsiklis et al. Linearly Parameterized Bandits. Math. Oper. Res., 2008.
[3] Csaba Szepesvári et al. Partial Monitoring - Classification, Regret Bounds, and Algorithms. Math. Oper. Res., 2014.
[4] Zheng Wen et al. Tight Regret Bounds for Stochastic Combinatorial Semi-Bandits. AISTATS, 2014.
[5] Christian Schindelhauer et al. Discrete Prediction Games with Arbitrary Feedback and Loss. COLT/EuroCOLT, 2001.
[6] Nicolò Cesa-Bianchi et al. Regret Minimization Under Partial Monitoring. 2006 IEEE Information Theory Workshop (ITW '06), 2006.
[7] Wei Chen et al. Combinatorial Partial Monitoring Game with Linear Feedback and Its Applications. ICML, 2014.
[8] Peter Auer et al. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 2002.
[9] Thomas P. Hayes. A large-deviation inequality for vector-valued martingales. 2003.
[10] R. Agrawal et al. Certainty equivalence control with forcing: revisited. 1990.
[11] Wei Chen et al. Combinatorial Multi-Armed Bandit: General Framework and Applications. ICML, 2013.
[12] Hiroshi Nakagawa et al. Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial Monitoring. NIPS, 2015.
[13] Ambuj Tewari et al. Online Ranking with Top-1 Feedback. AISTATS, 2014.