Identifying the relevant information for software testing technique selection

One of the major problems in the software testing area is how to obtain a suitable set of test cases for testing a software system. This set should ensure maximum effectiveness with the fewest possible test cases. Numerous testing techniques are available today for generating test cases. However, many of them are never used, while a few are used over and over again. Testers have little (if any) information about the available techniques, their usefulness and, generally, how well suited they are to the project at hand. This lack of information leads to poorly grounded decisions about which testing techniques to use. This paper presents the results of developing an artefact (called a characterisation schema) to assist with testing technique selection. When instantiated for a variety of techniques, the schema provides developers with a catalogue containing enough information for them to select the techniques best suited to a given project. The schema, and its associated catalogue, ensure that the decisions developers make are based on grounded knowledge of the techniques rather than on perceptions, suppositions and assumptions.
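As an illustration of the idea only (the attribute names below are hypothetical, not the paper's actual schema), a schema instance can be sketched as a record of technique attributes, and selection as filtering the instantiated catalogue against project constraints:

```python
from dataclasses import dataclass

@dataclass
class SchemaEntry:
    """One catalogue entry: a testing technique characterised by attributes.
    Attribute names are illustrative placeholders, not the paper's schema."""
    name: str
    cost: str              # application cost: "low", "medium" or "high"
    tool_support: bool     # whether automated tool support exists
    knowledge_needed: str  # expertise required: "basic" or "advanced"

# A toy catalogue built by instantiating the schema for several techniques.
catalogue = [
    SchemaEntry("boundary value analysis", "low", True, "basic"),
    SchemaEntry("mutation testing", "high", True, "advanced"),
    SchemaEntry("random testing", "low", True, "basic"),
]

def select(catalogue, max_cost, expertise):
    """Return names of techniques whose attributes fit the project."""
    cost_rank = {"low": 0, "medium": 1, "high": 2}
    skill_rank = {"basic": 0, "advanced": 1}
    return [e.name for e in catalogue
            if cost_rank[e.cost] <= cost_rank[max_cost]
            and skill_rank[e.knowledge_needed] <= skill_rank[expertise]]

print(select(catalogue, max_cost="low", expertise="basic"))
# → ['boundary value analysis', 'random testing']
```

The point of the sketch is only that selection becomes a mechanical query over explicit attributes instead of a judgement based on perceptions or assumptions.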
