Agent-Based Social Simulation and Its Necessity for Understanding Socially Embedded Phenomena

Some issues and varieties of computational and other approaches to understanding socially embedded phenomena are discussed. It is argued that, of all the approaches currently available, only agent-based simulation holds out the prospect of adequately representing and understanding phenomena such as social norms.

Cognitive Simulation Modelling

For the last few decades computers have been used to model cognitive processes (e.g. Newell and Simon 1972). That is, computer programs are made that allow the simulation of aspects of human cognition. This field has grown over the years in parallel to that of artificial intelligence, which differs in that it aims to implement aspects of intelligence using computer programs, but not necessarily in the way humans achieve this. Cognitive modelling is done for a number of different purposes, with different levels of realism and pursuing a variety of different goals. However, at least within the field, the usefulness of cognitive modelling is well established: even if a particular model or simulation turns out to be mistaken (i.e. for the model's purpose the brain turns out to work in a significantly different way), having to instantiate an idea about the workings of our cognition forces the model to be (1) complete (no hidden explanatory gaps), (2) explicit (no vagueness) and (3) feasible (it has to be computable within a reasonable amount of time). Thus instantiating a theory in a computational model constrains the theory in useful ways. Of course, if the model can also be constrained by evidence of how human cognition happens to work, that is even better.

Although there have been many architectures and frameworks for cognitive modelling, SOAR and ACT-R have attracted the most researchers. Each of these has evolved into a substantial subfield, encompassing a whole host of models. However, these are not ideal for capturing social aspects of cognition because: (1) they are quite computationally heavy, which makes it difficult to include the interaction of many agents (Ye and Carley 1995 included three interacting SOAR agents, but it required a separate computer to run each one); (2) their input/output facilities are not so well supported; and (3) they are overly complex for most social simulation purposes, for example in the synchronisation of agent actions, which would require explicit signalling in SOAR/ACT-R. The main reason, however, is that the researchers involved have been focused on individual cognition and not very concerned with the social interaction of agents, perhaps assuming that this is something to be dealt with after the cognitive model has been sorted out.

More recently, agent technologies that are specifically inspired by human cognition have been developed, such as BDI (Belief, Desire, Intention), whose account of intention derives from Bratman (1999). BDI is supported by a logic-based approach (Rao and Georgeff 1998) which allows reasoning about beliefs, desires and intentions to be done by software agents (a minimal sketch of the resulting deliberation cycle is given below).
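To make the flavour of such an architecture concrete, the following is a minimal sketch of the perceive-deliberate-act cycle that BDI-style agents implement. It is illustrative only: the class, the method names and the toy plan library are invented for this example, and do not correspond to the API of any actual BDI platform or to Rao and Georgeff's logics.

```python
# A minimal, hypothetical sketch of a BDI-style deliberation cycle.

class BDIAgent:
    def __init__(self, beliefs, desires, plan_library):
        self.beliefs = beliefs            # what the agent holds true about the world
        self.desires = desires            # states of the world the agent would like
        self.plan_library = plan_library  # maps an intention to a sequence of actions
        self.intentions = []              # desires the agent has committed to pursue

    def perceive(self, percept):
        """Belief revision: fold new information into the belief base."""
        self.beliefs.update(percept)

    def deliberate(self):
        """Commit to desires that are not yet satisfied and have a known plan."""
        for desire in self.desires:
            if desire not in self.beliefs and desire in self.plan_library:
                if desire not in self.intentions:
                    self.intentions.append(desire)

    def act(self):
        """Execute the next step of the current intention, if any."""
        if not self.intentions:
            return None
        intention = self.intentions[0]
        plan = self.plan_library[intention]
        action = plan.pop(0)
        if not plan:                      # plan exhausted: drop the intention
            self.intentions.pop(0)
        return action

# One pass of the perceive-deliberate-act loop:
agent = BDIAgent(beliefs=set(), desires={"door_open"},
                 plan_library={"door_open": ["walk_to_door", "push_door"]})
agent.perceive({"at_corridor"})
agent.deliberate()
print(agent.act())  # -> "walk_to_door"
```

Real BDI systems add much more, notably plan selection among alternatives, intention reconsideration and failure handling; the point here is only the overall control cycle.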
Agent-Based Architectures and Frameworks

More recently the field of “Software Agents” or “Multi-Agent Systems” has developed its own series of architectures. These can be broadly classified as “cognitive”, but the connection between human and agent cognition is much looser: as in AI, there is no necessity that software agents work in the same way that humans do. However, there are several reasons why human cognition, and in particular human social cognition, remains the primary source of ideas for the necessary structures and processes of agent cognition.

Firstly, effective cognition (that is, cognitive structures and processes that allow an agent to operate within its environment in an autonomous manner) is difficult to arrange, but is obviously something humans manage to a considerable degree. Systems inspired by or derived from how humans think are therefore a rich source of ideas for how to endow software agents with commensurate abilities. Abilities such as reasoning, sub-dividing problems, pattern recognition and associative memory are all sources for implemented and tested agent processes.

Secondly, the essentially social problems that an effective agent has to deal with have a lot in common with those humans cope with. Issues such as social recognition, trust, reputation, obligation, negotiation, communication and speech acts all have direct application in multi-agent systems. Thus concepts such as trust and obligation have been formalised as part of a framework for understanding what they might mean in the extra-human context of software agents (e.g. Conte and Castelfranchi 1995), and ideas taken from social science have been explicitly applied within distributed computational systems (e.g. Hales and Edmonds 2005).

Thirdly, it has been discovered that specifying and designing effective multi-agent systems can be facilitated by an analysis based on social roles (Wooldridge et al 2000). A number of methodologies thus use a quasi-social analysis, which identifies roles that agents might fill, defined in terms of the rights, obligations, protocols etc. pertaining to each role, as an aid to the specification of a multi-agent system (a sketch of such a role specification is given below).

As in cognitive modelling, what could work in a social setting provides a priori constraints upon the possible theories and architectures that might lie behind social norms, but this is clearly insufficient on its own for understanding how human norms actually work. However, due to the close parallels between the sort of processes used in multi-agent systems and those thought to occur in human social systems, the techniques and technology of multi-agent systems make an ideal tool for analysing the complex and intertwined processes involved in social norms.
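As an illustration of the third point, here is a hedged sketch of what a role-based specification might look like in code. The Role structure, its fields and the example auctioneer role are hypothetical inventions in the general spirit of such quasi-social analyses, not the actual notation of Wooldridge et al (2000).

```python
# A hypothetical sketch of a role-based agent specification.

from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    rights: list[str] = field(default_factory=list)       # resources the role may access
    obligations: list[str] = field(default_factory=list)  # states the role must bring about
    protocols: list[str] = field(default_factory=list)    # interactions the role takes part in

# A quasi-social role that an agent might fill in an auction system:
auctioneer = Role(
    name="Auctioneer",
    rights=["read_bid_queue", "close_auction"],
    obligations=["announce_winner_after_deadline"],
    protocols=["english_auction", "result_notification"],
)

def can_fill(agent_capabilities: set[str], role: Role) -> bool:
    """An agent can fill a role only if it supports all of the role's protocols."""
    return set(role.protocols) <= agent_capabilities

print(can_fill({"english_auction", "result_notification", "gossip"}, auctioneer))  # True
```

The design point is that the role, not the individual agent, carries the rights, obligations and protocols; agents are then matched to roles, which is what makes the analysis quasi-social.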
The Social Intelligence Hypothesis

The Social Intelligence Hypothesis (Kummer et al 1997) says that the evolutionary advantage of human intelligence (and to a lesser degree the intelligence of the great apes) lies in the ability to relate in socially sophisticated ways. These social abilities allow humans to cooperate, form and maintain groups, communicate, teach information to the next generation, know whom to trust, gossip, and so on. Together these abilities allow groups of humans to survive in a variety of niches (the tundra, the Kalahari desert etc.) where individual humans (even very clever individual humans) could not. They seem to achieve this by the development, maintenance and adaptation of group cultures of technologies and social institutions that allow survival in each niche (Reader 1988). This hypothesis thus combines a plausible account of the evolutionary advantage of our intelligence with an explanation of many of its unique characteristics. If it is true, then the social abilities of humans are not merely an “add-on” to our general intelligence, nor an outcome of an otherwise evolved intelligence, but are the core of and reason for our intelligence (Edmonds and Dautenhahn 1998).

Rather, it is our “general” intellectual abilities that are the by-products of our social intelligence: for example, the fundamental utility of language lies in communication and speech acts, but it also happens to be useful for externalising and formalising reasoning. To understand our intelligence therefore requires understanding its social abilities, abilities that only make sense in a social context. However, understanding how our abilities work within their social context is very difficult. If one simply observes what people are doing in a social context, one cannot see the corresponding changes in the cognition of the actors involved; if one carefully determines the cognitive processes in laboratory experiments, one misses most of the social context in which the abilities make sense.

References

[1] Michael E. Bratman. Intention, Plans, and Practical Reason, 1991.

[2] V. Feldmann, et al. Virtual worlds of precision: computer simulation in the sciences and social sciences, 2005.

[3] Bruce Edmonds, et al. Simulation and complexity: how they can relate, 2005.

[4] Mark S. Granovetter. Economic Action and Social Structure: The Problem of Embeddedness, 1985, American Journal of Sociology.

[5] Bruce Edmonds, et al. The Contribution of Society to the Construction of Individual Intelligence, 1998.

[6] Bruce Edmonds, et al. Applying a socially inspired technique (tags) to improve cooperation in P2P networks, 2005, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans.

[7] Kathleen M. Carley, et al. The nature of the social agent, 1994.

[8] David Hales. Cooperation without Memory or Space: Tags, Groups and the Prisoner's Dilemma, 2000, MABS.

[9] Bruce Edmonds, et al. The Use of Models: Making MABS Actually Work, 2000.

[10] T. Kuhn. The Structure of Scientific Revolutions, 1964.

[11] Gerd Gigerenzer, et al. The social intelligence hypothesis, 1997.

[12] Kathleen M. Carley, et al. Radar-Soar: Towards an artificial organization composed of intelligent agents, 1995.

[13] G. Nigel Gilbert, et al. Simulation for the social scientist, 1999.

[14] John Reader. Man on Earth, 1988.

[15] Nicholas R. Jennings, et al. The Gaia Methodology for Agent-Oriented Analysis and Design, 2000, Autonomous Agents and Multi-Agent Systems.

[16] R. Conte, et al. Cognitive and social action, 1995.

[17] Sander van der Hoog. On Multi-Agent Based Simulation, 2004.

[18] Allen Newell, et al. SOAR: An Architecture for General Intelligence, 1987, Artificial Intelligence.

[19] Anand S. Rao, et al. Decision Procedures for BDI Logics, 1998, Journal of Logic and Computation.

[20] Sabine Maasen, et al. Human by Nature: Between Biology and the Social Sciences, 1998.

[21] Allen Newell, et al. Human Problem Solving, 1973.

[22] O. Wolkenhauer. Why model?, 2013, Frontiers in Physiology.

[23] R. Giere. Explaining Science: A Cognitive Approach, 1991.

[24] C. Athena Aktipis, et al. Know when to walk away: contingent movement and the evolution of cooperation, 2004, Journal of Theoretical Biology.