Simulation Studies of Multi-armed Bandits with Covariates (Invited Paper)

We evaluate the performance of several action-selection methods on the multi-armed bandit problem with covariates. We resort to simulations because our primary concern is the speed with which the different methods identify the optimal policy, not their asymptotic behaviour. The experimental results show that the ε-greedy methods perform robustly, while the interval estimation strategies learn the optimal policy fastest. We propose a metric to quantify the difficulty of a multi-armed bandit problem with covariates and show that there is a trade-off among the different performance measures: no single method satisfies all of them simultaneously.
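To make the setting concrete, the following is a minimal, self-contained sketch of ε-greedy action selection on a two-armed bandit with a one-dimensional covariate. It is an illustration of the general technique, not the paper's exact experimental setup: the per-arm linear reward model (`y ≈ θ · x`), the arm parameters, the noise level, and the value ε = 0.1 are all assumptions chosen so that the optimal arm depends on the covariate.

```python
import random

random.seed(0)


class EpsilonGreedyBandit:
    """epsilon-greedy selection with a per-arm scalar linear reward model.

    Each arm a keeps running sums for an online least-squares fit of
    theta_a in the (assumed) model  reward = theta_a * x + noise.
    """

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.sxx = [1e-6] * n_arms  # sum of x^2 per arm (tiny init avoids div-by-zero)
        self.sxy = [0.0] * n_arms   # sum of x * reward per arm

    def select(self, x):
        # With probability epsilon explore a random arm, else exploit
        # the arm with the highest estimated reward theta_hat * x.
        if random.random() < self.epsilon:
            return random.randrange(len(self.sxx))
        estimates = [x * sxy / sxx for sxx, sxy in zip(self.sxx, self.sxy)]
        return max(range(len(estimates)), key=estimates.__getitem__)

    def update(self, arm, x, reward):
        self.sxx[arm] += x * x
        self.sxy[arm] += x * reward


# Toy simulation (assumed parameters): arm 0 is optimal for x > 0,
# arm 1 for x < 0, so the optimal policy genuinely depends on the covariate.
thetas = [1.0, -1.0]
bandit = EpsilonGreedyBandit(n_arms=2, epsilon=0.1)
T = 2000
correct = 0
for t in range(T):
    x = random.uniform(-1, 1)
    arm = bandit.select(x)
    reward = thetas[arm] * x + 0.1 * random.gauss(0, 1)
    bandit.update(arm, x, reward)
    if (arm == 0) == (x > 0):  # did we pick the truly optimal arm?
        correct += 1

print(correct / T)  # fraction of rounds on which the optimal arm was chosen
```

An interval estimation strategy would replace the `select` rule with an upper confidence bound on each arm's estimated reward instead of random exploration; the rest of the loop is unchanged.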