Aggregation Under Bias: Rényi Divergence Aggregation and Its Implementation via Machine Learning Markets

Trading in information markets, such as machine learning markets, has been shown to be an effective approach to aggregating the beliefs of different agents. In machine learning, aggregation is commonly done with linear opinion pools or logarithmic (log) opinion pools, and it is natural to ask how information market aggregation relates to these machine learning approaches. In this paper we introduce a spectrum of compositional methods, Rényi divergence aggregators, that interpolate between log opinion pools and linear opinion pools. We show that these compositional methods are maximum entropy distributions for aggregating information from agents subject to individual biases, with the Rényi divergence parameter dependent on the degree of bias. In the limit of no bias, this reduces to the log opinion pool, which is then the optimal aggregator. We demonstrate this relationship practically on both simulated and real datasets. We then return to information markets and show that Rényi divergence aggregators are directly implemented by machine learning markets with isoelastic utilities, and so can arise from the autonomous, self-interested decision making of individuals contributing different predictors. The risk aversion of the isoelastic utility relates directly to the Rényi divergence parameter, and hence encodes how strongly an agent believes (s)he may be subject to an individual bias that could affect the trading outcome: if an agent believes (s)he might be acting on significantly biased information, a more risk averse isoelastic utility is warranted.
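To make the interpolation concrete, the sketch below contrasts a linear opinion pool (weighted arithmetic mixture), a log opinion pool (normalised weighted geometric mixture), and a simple power-mean aggregator that moves between the two as its parameter varies. This is a minimal illustration, not the paper's exact formulation: the parameterisation (alpha standing in for the Rényi divergence parameter), the weights, and the variable names are assumptions made for the example.

```python
# Illustrative sketch of opinion pool aggregation (assumed parameterisation,
# not the paper's exact Rényi divergence aggregator).
import numpy as np

def linear_pool(P, w):
    """Weighted arithmetic mixture of agent distributions (rows of P)."""
    return w @ P

def log_pool(P, w):
    """Weighted geometric mixture, renormalised to sum to one."""
    q = np.exp(w @ np.log(P))
    return q / q.sum()

def power_mean_pool(P, w, alpha):
    """Power-mean interpolation between the two pools: alpha = 0 gives the
    linear pool, alpha -> 1 recovers the log pool (illustrative choice)."""
    if np.isclose(alpha, 1.0):
        return log_pool(P, w)
    q = (w @ P ** (1.0 - alpha)) ** (1.0 / (1.0 - alpha))
    return q / q.sum()

# Three agents' beliefs over four outcomes, aggregated with equal weights.
P = np.array([[0.70, 0.10, 0.10, 0.10],
              [0.40, 0.30, 0.20, 0.10],
              [0.25, 0.25, 0.25, 0.25]])
w = np.ones(3) / 3

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha}:", np.round(power_mean_pool(P, w, alpha), 3))
```

Running the loop shows the aggregate distribution moving smoothly from the arithmetic mixture towards the geometric mixture as alpha increases, which is the kind of interpolation between linear and log opinion pools the abstract describes.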
