The use of agent technology for building complex systems is increasing, and there are compelling reasons to adopt it. Benfield et al. [4] showed a productivity gain of over 300% using a BDI (Belief Desire Intention) agent approach, while other work has calculated that even a very modest plan and goal structure provides well over a million ways to achieve a given goal (for instance, with three applicable plans per goal, three subgoals per plan, and a goal-plan tree three levels deep, there are already 3 × (3 × 3³)³ = 1,594,323 distinct ways to achieve the top-level goal), providing enormous flexibility in a modular manner. However, the complexity of the systems that can be built using this technology does create concerns about how to verify and validate their correctness.

In this paper we briefly describe an approach, and a tool, to assist in comprehensive automated unit testing within a BDI agent system. While this approach can never guarantee program correctness, comprehensive testing certainly increases confidence that there are no major problems, and the fact that we automate both test case generation and test execution greatly increases the likelihood that the testing will be done comprehensively. Given the enormous number of possible executions of even a single goal, it is virtually impossible to test all program traces; once interleaved goals within an agent, or interactions between agents, are considered, comprehensive testing of all executions becomes clearly impossible. Instead, we focus on testing the basic units of the agent program: its beliefs, plans and events (or messages). Our approach is to ascertain that, no matter what the input variables to an entity or the environment conditions on which it relies, the entity behaves "as expected", where the expected behaviour is obtained from design artefacts produced as part of an agent design methodology.

We build on previous work [5], which described a basic architecture and approach. Here we address some of the details of setting up the environment necessary to realise that approach effectively: mechanisms to specify the initialization procedures for a given unit, to assign values to variables when executing test cases, and to manage any interaction with external entities. The testing tool and approach described have been implemented within PDT (the Prometheus Design Tool), and rely on the implemented agent system being written in JACK. The testing process itself follows the steps described in [5].
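To make the setup concrete, the sketch below shows one plausible shape for such a unit-testing harness in plain Java: a mock environment standing in for external entities, an initialization step that seeds the beliefs a unit's context condition reads, a set of variable assignments (one per generated test case), and a runner that executes each case and compares the outcome against the expected behaviour. This is a minimal illustration under our own naming; MockEnvironment, PlanUnderTest, TestCase, runAll and the fuelLevel belief are all assumptions for the example, not the actual PDT/JACK testing API.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of automated unit testing for a single agent unit (here, a plan).
 * All names are illustrative assumptions, not the PDT/JACK testing API.
 */
public class AgentUnitTestSketch {

    /** Stand-in for external entities the unit would normally interact with. */
    static class MockEnvironment {
        private final Map<String, Object> beliefs = new HashMap<>();
        void setBelief(String name, Object value) { beliefs.put(name, value); }
        Object belief(String name) { return beliefs.get(name); }
    }

    /** The unit under test: a plan body parameterised by its input variables. */
    interface PlanUnderTest {
        boolean execute(Map<String, Object> inputs, MockEnvironment env);
    }

    /** One generated test case: a variable assignment plus the expected outcome. */
    static class TestCase {
        final Map<String, Object> inputs;
        final boolean expectedSuccess;
        TestCase(Map<String, Object> inputs, boolean expectedSuccess) {
            this.inputs = inputs;
            this.expectedSuccess = expectedSuccess;
        }
    }

    /** Execute every case against the unit and report pass/fail per case. */
    static void runAll(PlanUnderTest plan, MockEnvironment env, TestCase... cases) {
        for (TestCase tc : cases) {
            boolean actual = plan.execute(tc.inputs, env);
            System.out.printf("inputs=%s expected=%b actual=%b -> %s%n",
                    tc.inputs, tc.expectedSuccess, actual,
                    actual == tc.expectedSuccess ? "PASS" : "FAIL");
        }
    }

    public static void main(String[] args) {
        // Initialization procedure: seed the belief the plan's context reads.
        MockEnvironment env = new MockEnvironment();
        env.setBelief("fuelLevel", 40);

        // A toy plan: succeeds only if the requested distance fits the fuel level.
        PlanUnderTest travelPlan = (inputs, e) -> {
            int distance = (Integer) inputs.get("distance");
            int fuel = (Integer) e.belief("fuelLevel");
            return distance <= fuel;
        };

        // Variable assignments covering typical and boundary values.
        runAll(travelPlan, env,
                new TestCase(Map.of("distance", 10), true),
                new TestCase(Map.of("distance", 40), true),
                new TestCase(Map.of("distance", 100), false));
    }
}
```

In a full harness of this kind, the mock environment would additionally record any messages the unit sends, so that interactions with external entities can be checked, and the test cases would be derived from the design artefacts rather than written by hand.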
[1] Robert V. Binder. Testing Object-Oriented Systems: Models, Patterns, and Tools. 1999.
[2] Ilene Burnstein. Practical Software Testing. Springer Professional Computing, 2003.
[3] Stuart Reid. Review of: The Art of Software Testing, Second edition, by Glenford J. Myers, revised and updated by Tom Badgett and Todd M. Thomas, with Corey Sandler (John Wiley and Sons, New Jersey, USA, 2004, ISBN 0-471-46912-2). Software Testing, Verification and Reliability, 2005.
[4] Steve S. Benfield et al. Making a strong business case for multiagent technology. In AAMAS '06, 2006.
[5] Lin Padgham et al. Automated Unit Testing for Agent Systems. In ENASE, 2007.