Experiences in simulating multi-agent systems using TAEMS

As researchers in multi-agent systems, we hope to build, deploy, and most importantly evaluate multi-agent systems in real, open environments. Unfortunately, working in such environments usually means expending significant energy on issues orthogonal to the initial goals of the research, such as knowledge engineering and low-level system integration. To avoid this overhead, many researchers choose to implement, test, and evaluate their multi-agent systems in a simulated world. In addition to providing a better-defined and more predictable debugging environment, a good simulator can help evaluate and quantify aspects of multi-agent systems and multi-agent coordination in a controlled setting through repeated experiments. We report on experiences in simulating multi-agent systems using TAEMS.

TAEMS is a task modeling language that can be used to represent agent activities. It models planned actions, candidate activities, and alternative solution paths from a quantified perspective, using a task decomposition tree. In this design, the root nodes of the structure, or task groups, represent goals the agent can achieve. Internal nodes, or tasks, represent sub-goals and provide the organizational structure for primitive executable methods, which reside at the leaves of the tree. Each method is characterized along three dimensions: quality, which the agent hopes to maximize; cost, which the agent tries to minimize; and duration, which describes the time required to execute the method. The dimensions themselves are discrete distributions, so each probability/value pair represents a potential result for the characteristic in question.
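The task-tree and method structure described above can be sketched in code. This is a minimal illustration, not the actual TAEMS implementation: the class and method names (`Method`, `Task`, `leaf_methods`, the example task group) are hypothetical, but the representation follows the abstract's description — internal task nodes organizing leaf methods, each characterized by quality, cost, and duration as discrete (probability, value) distributions.

```python
class Method:
    """Executable leaf of a TAEMS-style task structure.

    Each dimension (quality, cost, duration) is a discrete distribution
    represented as a list of (probability, value) pairs."""

    def __init__(self, name, quality, cost, duration):
        self.name = name
        self.quality = quality
        self.cost = cost
        self.duration = duration

    @staticmethod
    def expected(dist):
        # Expected value of a discrete (probability, value) distribution.
        return sum(p * v for p, v in dist)

    def expected_outcome(self):
        # (E[quality], E[cost], E[duration]) for comparing alternatives.
        return (self.expected(self.quality),
                self.expected(self.cost),
                self.expected(self.duration))


class Task:
    """Internal node: a goal or sub-goal organizing subtasks and methods.
    A root-level Task plays the role of a task group."""

    def __init__(self, name, children):
        self.name = name
        self.children = children

    def leaf_methods(self):
        # Yield all primitive methods at the leaves of this subtree.
        for child in self.children:
            if isinstance(child, Method):
                yield child
            else:
                yield from child.leaf_methods()


# A hypothetical task group with two alternative methods for one sub-goal.
root = Task("GatherData", [
    Task("QueryWeb", [
        Method("fast-query",
               quality=[(0.8, 10.0), (0.2, 0.0)],   # may fail outright
               cost=[(1.0, 1.0)],
               duration=[(1.0, 2.0)]),
        Method("thorough-query",
               quality=[(1.0, 15.0)],
               cost=[(1.0, 4.0)],
               duration=[(0.5, 8.0), (0.5, 12.0)]),
    ]),
])

for m in root.leaf_methods():
    q, c, d = m.expected_outcome()
    print(f"{m.name}: E[quality]={q}, E[cost]={c}, E[duration]={d}")
```

Comparing expected outcomes across alternative methods in this way is what lets a quantified task structure support reasoning about trade-offs (e.g. a fast but unreliable method versus a slow, certain one).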
