Model Based Testing for Agent Systems (Extended Abstract)

The use of agent technology for building complex systems is increasing, and there are compelling reasons to use this technology. Benfield [1] showed a productivity gain of over 300% using a BDI (Belief-Desire-Intention) agent approach, while other work calculated that even a very modest plan and goal structure provides well over a million ways to achieve a given goal, offering enormous flexibility in a modular manner. However, the complexity of the systems that can be built using this technology creates concerns about how to verify and validate their correctness.

In this paper we briefly describe an approach and tool to assist in comprehensive automated unit testing within a BDI agent system. While this approach can never guarantee program correctness, comprehensive testing certainly increases confidence that there are no major problems. The fact that we automate both test case generation and execution greatly increases the likelihood that the testing will be done in a comprehensive manner. Given the enormous number of possible executions of even a single goal, it is virtually impossible to attempt to test all program traces. Once interleaved goals within an agent, or interactions between agents, are considered, comprehensive testing of all executions clearly becomes impossible. Instead, we focus on testing the basic units of the agent program: the beliefs, plans and events (or messages). Our approach is to ascertain that, no matter what the values of the input variables to an entity, or the environment conditions on which the entity may rely, the entity behaves "as expected", where the expected behaviour is obtained from design artefacts produced as part of an agent design methodology.

We build on previous work [6] which described a basic architecture and approach. In this work we address some of the details of setting up the environment that is necessary to effectively realise that approach. More specifically, we provide mechanisms to specify the initialization procedures for a given unit, to assign values to variables when executing test cases, and to manage any interaction with external entities. The testing tool and approach described have been implemented within PDT, and rely on the implemented agent system being written in JACK. The testing process as described in [6] is as follows:
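Before turning to those process steps, the sketch below illustrates, in plain Java, the general shape of the kind of unit-test driver the abstract describes: for a single unit (a plan, belief or event), enumerate assignments of its input variables and the environment conditions it relies on, initialize the unit for each case, run it, and compare the observed outcome against the behaviour expected from the design artefacts. All type and method names here (UnitUnderTest, DesignExpectation, runAll) are hypothetical illustrations under these assumptions; they are not the actual PDT or JACK testing API.

```java
import java.util.List;
import java.util.Map;

public class UnitTestDriverSketch {

    /** Stand-in for the plan/belief/event unit being exercised (hypothetical). */
    interface UnitUnderTest {
        void initialise(Map<String, Object> environment);    // initialization procedure for one case
        String execute(Map<String, Object> inputVariables);  // run the unit and report its outcome
    }

    /** Expected behaviour, as it might be derived from design artefacts (hypothetical). */
    interface DesignExpectation {
        boolean accepts(Map<String, Object> inputs,
                        Map<String, Object> environment,
                        String observedOutcome);
    }

    /** Run every combination of input assignment and environment condition; count failures. */
    static int runAll(UnitUnderTest unit,
                      DesignExpectation expected,
                      List<Map<String, Object>> inputAssignments,
                      List<Map<String, Object>> environmentConditions) {
        int failures = 0;
        for (Map<String, Object> env : environmentConditions) {
            for (Map<String, Object> inputs : inputAssignments) {
                unit.initialise(env);                  // set up beliefs / stubs for this case
                String outcome = unit.execute(inputs); // assign variables and execute
                if (!expected.accepts(inputs, env, outcome)) {
                    failures++;
                    System.out.printf("FAIL inputs=%s env=%s outcome=%s%n", inputs, env, outcome);
                }
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        // Toy unit: a plan that should succeed only when the door it relies on is open.
        UnitUnderTest openDoorPlan = new UnitUnderTest() {
            private boolean doorOpen;
            public void initialise(Map<String, Object> env) {
                doorOpen = Boolean.TRUE.equals(env.get("doorOpen"));
            }
            public String execute(Map<String, Object> in) {
                return doorOpen ? "succeeded" : "failed";
            }
        };
        // Expected behaviour as it might be read off the design model.
        DesignExpectation expectation = (in, env, out) ->
                out.equals(Boolean.TRUE.equals(env.get("doorOpen")) ? "succeeded" : "failed");

        int failures = runAll(openDoorPlan, expectation,
                List.of(Map.of("target", "room1")),
                List.of(Map.of("doorOpen", true), Map.of("doorOpen", false)));
        System.out.println(failures == 0 ? "All cases passed" : failures + " case(s) failed");
    }
}
```

In the actual tool, the input assignments and environment conditions would be generated automatically from the design artefacts rather than written by hand, and interactions with external entities would be intercepted by stubs set up during initialization; the sketch only shows the control flow of running generated cases against an expectation.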