History-dependent graphical multiagent models

A dynamic model of a multiagent system defines a probability distribution over possible system behaviors over time. Alternative representations for such models present tradeoffs in expressive power, and in the accuracy and cost of inferential tasks of interest. In a history-dependent representation, behavior at a given time is specified as a probabilistic function of some portion of system history. Models may be further distinguished by whether they specify individual or joint behavior. Joint behavior models are more expressive, but in general grow exponentially in the number of agents. Graphical multiagent models (GMMs) provide a more compact representation of joint behavior when agent interactions exhibit local structure. We extend GMMs to condition on history, yielding history-dependent GMMs (hGMMs) that support inference about system dynamics. To evaluate the hGMM representation, we study a voting consensus scenario in which agents on a network attempt to reach a preferred unanimous vote through a process of smooth fictitious play. We induce hGMMs and individual behavior models from example traces, showing that the former provide better predictions given limited history information. The induced hGMMs also offer advantages over sampling the true generative model for answering general inference queries.

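As a rough illustration of the two ingredients named in the abstract, the sketch below shows (i) how an hGMM-style joint action distribution factors into per-neighborhood potentials conditioned on history, and (ii) one round of smooth fictitious play in a biased-voting network. This is a minimal sketch written for this summary, not code from the paper: the function names, the multiplicative potential form, and the preference-weighted payoff are assumptions made for exposition.

```python
import numpy as np

# Illustrative sketch only; the exact potential and payoff forms are assumptions.

def hgmm_joint_prob(actions, history, neighbors, potential):
    """Unnormalized hGMM probability of a joint action profile.

    The joint distribution factors into one potential per agent, each depending
    only on the actions of that agent's neighborhood and on (a summary of)
    system history:  Pr(a | H) proportional to prod_i potential(i, a_{N_i}, H).
    """
    p = 1.0
    for i, nbrs in enumerate(neighbors):
        local_profile = tuple(actions[j] for j in [i] + list(nbrs))
        p *= potential(i, local_profile, history)
    return p  # normalize over all joint profiles to obtain Pr(a | H)


def smooth_fictitious_play_round(counts, neighbors, pref, temperature=0.1, rng=None):
    """One round of smooth fictitious play in a biased-voting scenario (assumed form).

    counts[i][v]: how often agent i has observed its neighbors vote v so far
    pref[i][v]:   agent i's payoff weight for consensus on vote v
    Each agent plays a logit (softmax) response to the expected payoff of each
    vote under the empirical frequencies of its neighbors' past votes.
    """
    rng = rng or np.random.default_rng()
    votes = np.empty(len(counts), dtype=int)
    for i, c in enumerate(counts):
        freq = c / c.sum()                  # empirical neighbor vote frequencies
        utility = pref[i] * freq            # assumed expected stage payoff
        logits = utility / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        votes[i] = rng.choice(len(probs), p=probs)
    # All agents vote simultaneously; then each updates its empirical counts.
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            counts[i][votes[j]] += 1
    return votes, counts
```

Iterating smooth_fictitious_play_round generates example traces of the kind described above; fitting the per-neighborhood potentials in hgmm_joint_prob to such traces corresponds to inducing an hGMM, whereas fitting each agent's response rule separately corresponds to an individual behavior model.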