The construction of a Multi-Agent System (MAS) is difficult because the design process takes place in a space with many dimensions. A MAS must control a system subject to a wide variety of constraints; it must establish cooperation between its constituent agents, each having its own goals and a different view of the world; and it must operate in an environment in which the agent society itself changes over time. To some extent, each of these problems can be solved in isolation; in combination, however, no satisfactory solution is known. What we would like is a design formalism in which the problems can be tackled in an integral way. Our approach is to view a MAS as a collection of decision makers operating on a dynamic system. This allows us to specify agents that take multiple criteria into account and that can operate in a changing environment. To coordinate agents, we need a formalism in which it is possible to reason about the behavior of agents. The outcome of a coordination process can itself be considered a dynamic system, or a plan operating on such a system. Coordination should be analyzed in a framework (e.g., an extension of the KARO formalism) in which the behavior of agents can be described at various levels of detail. We believe, however, that much more research is needed here, which brings us to the second topic of this paper: we describe a simulation environment that allows us to test solutions to the distributed control of a large class of dynamic systems.