Norm emergence in spatially constrained interactions

Behavioral norms are key ingredients that allow agent coordination where societal laws do not sufficiently constrain agent behaviors. Whereas social laws need to be enforced in a top-down manner, norms evolve in a bottom-up manner and are typically more self-enforcing. While effective norms can significantly enhance the performance of individual agents and agent societies, there has been little work in multiagent systems on the formation of social norms. We have recently used a model that supports the emergence of social norms via learning from interaction experiences. In our model, individual agents repeatedly interact with other agents in the society over instances of a given scenario. Each interaction is framed as a stage game. An agent learns its policy to play the game over repeated interactions with multiple agents. We term this mode of learning social learning, which is distinct from an agent learning from repeated interactions against the same player. We are particularly interested in situations where multiple action combinations yield the same optimal payoff. The key research question is whether the entire population learns to converge to a consistent norm. In this extension to our prior work, we study the emergence of norms via social learning when agents are physically distributed in an environment and are more likely to interact with agents in their neighborhood than with those farther away. The key new results include a surprising acceleration in learning when interaction ranges are limited. We also study the effects of pure-strategy players, i.e., non-learners, in the environment.
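The following is a minimal sketch, not the authors' implementation, of how norm emergence via social learning under spatial constraints can be simulated. It assumes a two-action coordination stage game in which both matching action pairs yield the same optimal payoff, stateless epsilon-greedy Q-learning, agents placed on a toroidal grid, and an interaction radius that restricts partners to nearby agents; all parameter values (grid size, radius, learning and exploration rates) are illustrative assumptions.

```python
# Sketch of norm emergence via social learning on a grid (assumed parameters).
import random

GRID = 10          # agents live on a GRID x GRID torus (assumption)
RADIUS = 1         # interaction range, Chebyshev distance (assumption)
EPSILON = 0.1      # exploration rate (assumption)
ALPHA = 0.3        # learning rate (assumption)
ROUNDS = 20000     # number of pairwise interactions (assumption)

class Agent:
    def __init__(self):
        self.q = [0.0, 0.0]        # Q-value for each of the two actions

    def act(self):
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            return random.randrange(2)
        return 0 if self.q[0] >= self.q[1] else 1

    def update(self, action, reward):
        # stateless Q-learning update
        self.q[action] += ALPHA * (reward - self.q[action])

agents = {(x, y): Agent() for x in range(GRID) for y in range(GRID)}

def neighbor(pos):
    """Pick a random partner within RADIUS on the torus (excluding self)."""
    x, y = pos
    while True:
        dx = random.randint(-RADIUS, RADIUS)
        dy = random.randint(-RADIUS, RADIUS)
        if (dx, dy) != (0, 0):
            return ((x + dx) % GRID, (y + dy) % GRID)

for _ in range(ROUNDS):
    pos = random.choice(list(agents))
    a, b = agents[pos], agents[neighbor(pos)]
    act_a, act_b = a.act(), b.act()
    # both matching joint actions pay the same, so two equivalent norms exist
    reward = 1.0 if act_a == act_b else -1.0
    a.update(act_a, reward)
    b.update(act_b, reward)

# Fraction of agents whose greedy action is action 0: values near 0 or 1
# indicate the population has converged on a single shared norm.
share = sum(ag.q[0] >= ag.q[1] for ag in agents.values()) / len(agents)
print(f"share preferring action 0: {share:.2f}")
```

Varying RADIUS in this sketch is one way to explore the paper's central question of how limited interaction ranges affect the speed of convergence; fixing some agents' policies instead of letting them learn would correspond to adding pure-strategy players.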
