Collaborative Goal Distribution in Distributed Multiagent Systems

Distributed multiagent systems consist of multiple agents that perform related tasks. In such systems, tasks are distributed among the agents by an operator based on shared information. The information used to assign tasks includes not only each agent's capabilities, but also its state, the state of the goals, and conditions in the surrounding environment. Distributed multiagent systems are typically constrained by uncertain information about nearby agents and by limited network availability for transferring information to the operator. Given the constraints of relying on an operator, a better-designed system would allow agents to distribute tasks on their own. This paper proposes a goal distribution strategy for collaborative distributed multiagent systems in which agents distribute tasks among themselves. In this strategy, a goal model is shared among all participating agents, enabling them to synchronize in order to achieve complex goals that require sequential execution. Agents in this system can exchange information over the network to which all other agents belong. The approach was tested and verified using the StarCraft II APIs introduced by Blizzard and Google DeepMind.
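
To make the idea of a shared goal model concrete, the Python sketch below shows one possible shape of such a model under simplifying assumptions: goals carry a required capability and sequential prerequisites, every agent holds a replicated copy of the model, and an agent claims a goal on its own rather than waiting for an operator. All names here (Goal, Agent, SharedGoalModel, choose_goal) are illustrative and do not come from the paper or from the StarCraft II APIs.

```python
# Minimal, illustrative sketch (hypothetical names, not the paper's actual API):
# each agent holds a replicated copy of a shared goal model, claims goals it is
# capable of, and respects sequential prerequisites between goals.
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Goal:
    name: str
    required_capability: str
    prerequisites: List[str] = field(default_factory=list)  # goals that must finish first
    assigned_to: Optional[str] = None
    done: bool = False

@dataclass
class Agent:
    agent_id: str
    capabilities: Set[str]

class SharedGoalModel:
    """Goal model replicated on every agent; in a real system, claims and
    completions would be broadcast over the shared network to keep copies in sync."""
    def __init__(self, goals: List[Goal]):
        self.goals = {g.name: g for g in goals}

    def available(self) -> List[Goal]:
        # A goal is available when it is unassigned and all prerequisites are done.
        return [g for g in self.goals.values()
                if g.assigned_to is None and not g.done
                and all(self.goals[p].done for p in g.prerequisites)]

    def claim(self, goal_name: str, agent_id: str) -> None:
        # Here this only updates the local copy; a deployment would send a message.
        self.goals[goal_name].assigned_to = agent_id

    def complete(self, goal_name: str) -> None:
        self.goals[goal_name].done = True

def choose_goal(agent: Agent, model: SharedGoalModel) -> Optional[Goal]:
    """Pick the first available goal whose capability requirement the agent meets."""
    for goal in model.available():
        if goal.required_capability in agent.capabilities:
            return goal
    return None

# Usage: the scouting goal is claimed first; the attack goal stays blocked
# until its prerequisite has been reported as complete.
model = SharedGoalModel([
    Goal("scout_enemy_base", "scout"),
    Goal("attack_enemy_base", "attack", prerequisites=["scout_enemy_base"]),
])
scout = Agent("agent-1", {"scout"})
goal = choose_goal(scout, model)
if goal is not None:
    model.claim(goal.name, scout.agent_id)
    model.complete(goal.name)  # pretend the scouting goal finished
```

This is only a sketch of the general pattern the abstract describes (capability-aware self-assignment over a replicated goal model); the paper's actual goal representation and synchronization protocol may differ.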
