Probabilistic Planning in AgentSpeak Using the POMDP Framework

AgentSpeak is a logic-based programming language, grounded in the Belief-Desire-Intention (BDI) paradigm, that is well suited to building complex agent-based systems. To limit computational complexity, AgentSpeak agents rely on a plan library, reducing the planning problem to the much simpler problem of plan selection. Such a plan library is often inadequate, however, when an agent is situated in an uncertain environment. In this work, we propose the AgentSpeak+ framework, which extends AgentSpeak with a mechanism for probabilistic planning. The beliefs of an AgentSpeak+ agent are represented as epistemic states, allowing the agent to reason about its uncertain observations and the uncertain effects of its actions. Each epistemic state consists of a POMDP, which encodes the agent's knowledge of the environment, together with its associated probability distribution (or belief state). The POMDP is also used to select optimal actions for achieving a given goal, even in the face of uncertainty.
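To make the belief-state component concrete: after each action-observation pair, a POMDP belief state is revised by the standard Bayesian update. The sketch below shows that update in isolation; the function and variable names and the toy two-state model are illustrative assumptions, not taken from the paper.

```python
def belief_update(b, a, o, T, O):
    """Standard POMDP belief update: b'(s') ∝ O[a][s'][o] * sum_s T[a][s][s'] * b[s].

    b : list of probabilities over states
    a : action label indexing T and O
    o : observation index
    T : T[a][s][s'] = Pr(s' | s, a), transition model
    O : O[a][s'][o] = Pr(o | s', a), observation model
    """
    n = len(b)
    new_b = [O[a][s2][o] * sum(T[a][s][s2] * b[s] for s in range(n))
             for s2 in range(n)]
    z = sum(new_b)  # normalising constant Pr(o | b, a)
    if z == 0:
        raise ValueError("observation has zero probability under this belief")
    return [p / z for p in new_b]

# Toy example: two hidden states with a static world and a noisy sensor
# that reports the true state 85% of the time.
T = {"listen": [[1.0, 0.0], [0.0, 1.0]]}      # "listen" does not change the state
O = {"listen": [[0.85, 0.15], [0.15, 0.85]]}  # sensor accuracy 0.85
b0 = [0.5, 0.5]                               # uniform prior
b1 = belief_update(b0, "listen", 0, T, O)     # observe evidence for state 0
# b1 == [0.85, 0.15]: belief shifts toward state 0
```

In an epistemic state as described above, this updated distribution is what the agent would consult (together with the POMDP's reward structure) when selecting an optimal action.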
