On Evolving Fixed Pattern Strategies for Iterated Prisoner's Dilemma

This paper describes the social evolution of an environment in which every individual plays a fixed, repeating pattern of behaviour. It follows Axelrod's work [1] on computer simulations of the Iterated Prisoner's Dilemma (IPD), which is widely regarded as a standard model for the evolution of cooperation. Previous studies by Axelrod [2], Hirshleifer and Coll [3], Lindgren [4], Fogel [5], and Darwen and Yao [6] focused on strategies that are history dependent; that is, these strategies consult the outcome of past moves against an opponent when deciding each new move. This class includes the most well-known strategy, tit-for-tat. The way strategies are encoded in the computer program reflects the model's assumptions about individual decision-making. In this paper, we study environments where all players simply repeat fixed patterns of behaviour without using past game history. A genetic algorithm is used to evolve such strategies in a co-evolutionary environment. Simulations indicate that such an environment is harmful to the evolution of cooperation.
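The co-evolutionary setup described above can be illustrated with a minimal sketch (not the authors' code): each strategy is a fixed sequence of cooperate/defect moves replayed cyclically with no memory of the opponent, fitness comes from round-robin IPD play within the population, and a standard genetic algorithm applies selection, crossover, and mutation. The payoff values, pattern length, and GA parameters below are illustrative assumptions, not values taken from the paper.

```python
import random

# Standard IPD payoff matrix (assumed values): (my payoff, opponent payoff)
# indexed by (my move, opponent move), where 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): (3, 3),  # reward for mutual cooperation
    ('C', 'D'): (0, 5),  # sucker's payoff vs. temptation to defect
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),  # punishment for mutual defection
}

PATTERN_LEN = 8      # length of the fixed, history-independent move pattern
ROUNDS = 64          # rounds per pairwise game
POP_SIZE = 20
GENERATIONS = 200
MUTATION_RATE = 0.01


def random_pattern():
    """A fixed-pattern strategy is just a list of moves replayed cyclically."""
    return [random.choice('CD') for _ in range(PATTERN_LEN)]


def play(p1, p2):
    """Play an iterated game; each player repeats its pattern, ignoring history."""
    s1 = s2 = 0
    for t in range(ROUNDS):
        m1, m2 = p1[t % len(p1)], p2[t % len(p2)]
        a, b = PAYOFF[(m1, m2)]
        s1 += a
        s2 += b
    return s1, s2


def fitness(population):
    """Co-evolutionary fitness: every pattern plays every other pattern."""
    scores = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            a, b = play(population[i], population[j])
            scores[i] += a
            scores[j] += b
    return scores


def evolve():
    population = [random_pattern() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = fitness(population)
        # Truncation selection: keep the better-scoring half as parents.
        ranked = [p for _, p in sorted(zip(scores, population),
                                       key=lambda x: x[0], reverse=True)]
        parents = ranked[:POP_SIZE // 2]
        children = []
        while len(children) < POP_SIZE:
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(1, PATTERN_LEN)            # one-point crossover
            child = mom[:cut] + dad[cut:]
            child = [('D' if g == 'C' else 'C') if random.random() < MUTATION_RATE
                     else g for g in child]                   # point mutation
            children.append(child)
        population = children
    coop_rate = sum(p.count('C') for p in population) / (POP_SIZE * PATTERN_LEN)
    print(f"final cooperation rate in evolved patterns: {coop_rate:.2f}")


if __name__ == '__main__':
    evolve()
```

Because the patterns cannot react to an opponent's defection, defectors are never punished within a game; under the assumptions above, the evolved cooperation rate tends to collapse, which is consistent with the paper's conclusion that such an environment is harmful to cooperation.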

[1] G. B. Fogel, et al. Ecological Applications of Evolutionary Computation, 2006.

[2] David P. Kraines, et al. Learning to cooperate with Pavlov: an adaptive strategy for the iterated Prisoner's Dilemma with noise, 1993.

[3] Kristian Lindgren, et al. Evolutionary phenomena in simple dynamics, 1992.

[4] Tom V. Mathew. Genetic Algorithm, 2022.

[5] W. Hamilton, et al. The evolution of cooperation, 1984, Science.

[6] M. Nowak, et al. Evolutionary games and spatial chaos, 1992, Nature.

[7] Robert Hoffmann, et al. Twenty Years on: The Evolution of Cooperation Revisited, 2000, J. Artif. Soc. Soc. Simul.

[8] 玄 光男. Report on the Australia-Japan Joint Workshop on Intelligent and Evolutionary Systems, 1998.

[9] B. Brembs. Chaos, cheating and cooperation: potential solutions to the Prisoner's Dilemma, 1996.

[10] Xin Yao, et al. Why more choices cause less cooperation in iterated prisoner's dilemma, 2001, Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546).

[11] Robert Hoffmann, et al. The Independent Localisations of Interaction and Learning in the Repeated Prisoner's Dilemma, 1999.

[12] John H. Holland, et al. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, 1992.

[13] Xiaodong Li, et al. Emergence of cooperation in the IPD game using spatial interactions, 2003.

[14] John Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, 1975.

[15] David B. Fogel, et al. Evolving Behaviors in the Iterated Prisoner's Dilemma, 1993, Evolutionary Computation.

[16] J. Hirshleifer, et al. What Strategies Can Support the Evolutionary Emergence of Cooperation?, 1988.

[17] D. Kraines, et al. Evolution of Learning among Pavlov Strategies in a Competitive Environment with Noise, 1995.

[18] Xin Yao, et al. On Evolving Robust Strategies for Iterated Prisoner's Dilemma, 1993, Evo Workshops.

[19] Thomas Bäck, et al. Evolutionary computation: comments on the history and current state, 1997, IEEE Trans. Evol. Comput.

[20] K. Lindgren, et al. Evolutionary dynamics of spatial games, 1994.

[21] Patrick J. Sutton, et al. Genetic algorithms: A general search procedure, 1994.