Robust Collaborative Online Learning

We consider a collaborative variant of prediction with expert advice. In each round, one user wishes to make a prediction and must choose which expert to follow. The users would like to share their experiences in order to learn faster: ideally, they could amortize the same total regret across the whole community of users. However, some of the users may behave maliciously, distorting their reported payoffs in order to manipulate the honest users of the system. And even if all users behave honestly, different experts may perform better for different users, so that sharing data can be counterproductive. We present a robust collaborative algorithm for prediction with expert advice, which guarantees that every subset H of users performs nearly as well as if they had shared all of their data and ignored all data from users outside of H. This algorithm limits the damage done by dishonest users to O(√T), compared to the O(T) we would incur by naively aggregating data. We also extend our results to general online convex optimization. The resulting algorithm achieves low regret but is computationally intractable. This demonstrates that there is no statistical obstruction to generalizing robust collaborative online learning, but it leaves the design of efficient algorithms as an open problem.
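As background for the single-user setting referenced above, the following is a minimal sketch of the classical Hedge (multiplicative weights) algorithm for prediction with expert advice, which achieves O(√T) regret against the best expert. This is the standard non-collaborative, non-robust baseline, not the robust collaborative algorithm from this paper; the learning rate eta and the loss interface are illustrative assumptions.

```python
import numpy as np

def hedge(losses, eta):
    """Classical Hedge baseline for prediction with expert advice.

    losses: array of shape (T, K) with per-round losses in [0, 1] for K experts.
    eta: learning rate (illustrative; a standard choice is sqrt(log(K) / T)).
    Returns (algorithm's total expected loss, best expert's total loss).
    """
    T, K = losses.shape
    weights = np.ones(K)
    alg_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()       # distribution over experts this round
        alg_loss += probs @ losses[t]         # expected loss of following a random expert
        weights *= np.exp(-eta * losses[t])   # downweight experts that performed poorly
    best_expert_loss = losses.sum(axis=0).min()
    return alg_loss, best_expert_loss

# Example: the regret (alg_loss - best_expert_loss) grows like O(sqrt(T log K)).
rng = np.random.default_rng(0)
T, K = 1000, 5
losses = rng.uniform(size=(T, K))
alg_loss, best = hedge(losses, eta=np.sqrt(np.log(K) / T))
print(f"regret: {alg_loss - best:.2f}")
```

The collaborative setting studied in the paper differs in that many users run such a procedure while sharing reported payoffs, and the contribution is limiting how much dishonest reports can inflate the honest users' regret.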