Software Test Data Generation using Ant Colony Optimization

Software testing remains the primary technique used to gain consumers' confidence in software. Testing any software system is an enormous task that is both time-consuming and costly (1), so techniques that support the automation of software testing promise significant cost savings. The application of artificial intelligence (AI) techniques in software engineering (SE) is an emerging area of research that encourages cross-fertilization of ideas between the two domains. A number of published works, for example (2) and (12), have begun to examine the effective use of AI for SE activities that are inherently knowledge-intensive and human-centred. Software testing has been identified as one of the SE areas with the most prolific use of AI techniques, most commonly genetic algorithms (GAs), for example (8) and (10). Other AI techniques used for test data generation include the AI planner approach (7) and simulated annealing (13). More recently, Ant Colony Optimization (ACO) has started to be applied in software testing (3, 10). Doerner and Gutjahr (3) described an approach that combines ACO with a Markov software usage model to derive a set of test paths for a software system, and McMinn and Holcombe (10) applied ACO as a supplementary optimization stage for finding sequences of transitional statements when generating test data for evolutionary testing. However, the results obtained so far are preliminary, and none of the reported results directly addresses specification-based software testing.
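To make the ACO idea concrete, the sketch below (not taken from any of the cited works) shows a minimal ant-system-style search that builds test paths over a small directed graph, rewarding edges that contribute new coverage. The graph, parameter values, and function names are illustrative assumptions only; a real test-path generator would use the program's control-flow or usage model and a problem-specific heuristic.

import random

# Toy directed usage/control-flow graph (purely illustrative): node -> successors.
GRAPH = {
    "start": ["a", "b"],
    "a": ["c", "exit"],
    "b": ["c"],
    "c": ["a", "exit"],
    "exit": [],
}
ALL_EDGES = {(u, v) for u, succs in GRAPH.items() for v in succs}

ALPHA = 1.0            # influence of pheromone on edge choice
RHO = 0.5              # pheromone evaporation rate
N_ANTS, N_ITER = 10, 30
MAX_LEN = 20           # guard against unbounded walks

def build_path(pheromone):
    """One ant walks from 'start' towards 'exit'; edge choice is biased by pheromone."""
    path, node = ["start"], "start"
    while node != "exit" and len(path) < MAX_LEN:
        succs = GRAPH[node]
        if not succs:
            break
        weights = [pheromone[(node, s)] ** ALPHA for s in succs]
        node = random.choices(succs, weights=weights)[0]
        path.append(node)
    return path

def path_edges(path):
    return {(path[i], path[i + 1]) for i in range(len(path) - 1)}

def aco_test_paths(seed=0):
    """Collect a small set of paths that together cover every edge of GRAPH."""
    random.seed(seed)
    pheromone = {e: 1.0 for e in ALL_EDGES}
    suite, covered = [], set()
    for _ in range(N_ITER):
        paths = [build_path(pheromone) for _ in range(N_ANTS)]
        # Evaporate, then reward edges on paths that contributed new coverage.
        for e in pheromone:
            pheromone[e] *= (1.0 - RHO)
        for p in paths:
            new = path_edges(p) - covered
            if new:
                suite.append(p)
                covered |= new
            for e in path_edges(p):
                pheromone[e] += len(new) / max(len(p) - 1, 1)
        if covered == ALL_EDGES:
            break
    return suite, covered

if __name__ == "__main__":
    suite, covered = aco_test_paths()
    print("edge coverage: %d/%d" % (len(covered), len(ALL_EDGES)))
    for p in suite:
        print(" -> ".join(p))

The deposit rule here favours short paths that add unseen edges; other formulations (for example, rewarding branch coverage of the program under test or fitness of generated input data) fit the same loop.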

[1] Lionel C. Briand. On the many ways software engineering can benefit from knowledge engineering, 2002, SEKE '02.

[2] Claude Caci, et al. Testing object-oriented systems, 2000, SOEN.

[3] Venansius Baryamureeba, et al. Proceedings of World Academy of Science, Engineering and Technology, Vol. 8, 2005.

[4] M. Dorigo, et al. Positive Feedback as a Search Strategy, 1991.

[5] Michael R. Lyu, et al. Achieving software quality with testing coverage measures, 1994, Computer.

[6] John A. Clark, et al. A search-based automated test-data generation framework for safety-critical systems, 2002.

[7] Israel A. Wagner, et al. ANTS: Agents on Networks, Trees, and Subgraphs, 2000, Future Gener. Comput. Syst.

[8] Roy P. Pargas, et al. Test-data generation using genetic algorithms, 1999.

[9] Marco Dorigo, et al. Ant system: optimization by a colony of cooperating agents, 1996, IEEE Trans. Syst. Man Cybern. Part B.

[10] Phil McMinn, et al. Search-based software test data generation: a survey, 2004, Softw. Test. Verification Reliab.

[11] Witold Pedrycz, et al. Computational intelligence in software engineering, 1997, CCECE '97, Canadian Conference on Electrical and Computer Engineering.

[12] Karl F. Doerner, et al. Extracting Test Sequences from a Markov Software Usage Model by ACO, 2003, GECCO.

[13] Robert V. Binder, et al. Testing Object-Oriented Systems: Models, Patterns, and Tools, 1999.

[14] Phil McMinn, et al. The State Problem for Evolutionary Testing, 2003, GECCO.

[15] Adele E. Howe, et al. Test Case Generation as an AI Planning Problem, 2004, Automated Software Engineering.