Reducing the Cost of Model-Based Testing through Test Case Diversity

Model-based testing (MBT) suffers from two main problems that make it impractical for many real-world systems: scalability and automatic oracle generation. When no automated oracle is available, or when testing must be performed on actual hardware or a restricted-access network, for example, only a small set of test cases can be executed and evaluated. However, MBT techniques usually generate large sets of test cases when applied to real systems, regardless of the coverage criterion. One therefore needs to select a small subset of these test cases with the highest possible fault-revealing power. In this paper, we investigate and compare various techniques for rewarding diversity in the selected test cases as a way to increase the likelihood of fault detection. We define a similarity measure on the representation of the test cases and use it in several algorithms that aim at maximizing the diversity of the selected test cases. Using an industrial system with actual faults, we found that rewarding diversity leads to higher fault detection than the techniques commonly reported in the literature, namely coverage-based and random selection. Among the investigated algorithms, diversification using Genetic Algorithms is the most cost-effective technique.
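
The sketch below illustrates the general idea of diversity-based test case selection with a Genetic Algorithm. It is not the paper's method: it assumes test cases are represented as sets of abstract steps, uses Jaccard similarity as a placeholder similarity measure, and applies a simple generational GA; the paper's actual test case representation, similarity measure, and GA configuration differ.

```python
# Illustrative sketch: select a fixed-size, maximally diverse subset of test cases.
# Assumptions (not from the paper): set-of-steps representation, Jaccard similarity,
# and a basic generational Genetic Algorithm with truncation selection.
import random


def jaccard_similarity(a, b):
    """Similarity between two test cases represented as sets of steps."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def diversity(subset, test_cases):
    """Fitness: sum of pairwise dissimilarities among the selected test cases."""
    selected = [test_cases[i] for i in subset]
    total = 0.0
    for i in range(len(selected)):
        for j in range(i + 1, len(selected)):
            total += 1.0 - jaccard_similarity(selected[i], selected[j])
    return total


def select_diverse_subset(test_cases, budget, generations=200, pop_size=30, seed=0):
    """Evolve a subset of `budget` test case indices that maximizes diversity."""
    rng = random.Random(seed)
    indices = list(range(len(test_cases)))

    def random_individual():
        return set(rng.sample(indices, budget))

    def crossover(p1, p2):
        # Child draws its indices from the union of the parents.
        return set(rng.sample(list(p1 | p2), budget))

    def mutate(ind):
        # Swap one selected test case for an unselected one.
        ind = set(ind)
        candidates = [i for i in indices if i not in ind]
        if candidates:
            ind.remove(rng.choice(list(ind)))
            ind.add(rng.choice(candidates))
        return ind

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda s: diversity(s, test_cases), reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            children.append(mutate(crossover(p1, p2)))
        population = survivors + children
    return max(population, key=lambda s: diversity(s, test_cases))


# Toy usage: ten test cases, each a set of step identifiers; keep three of them.
tests = [set(random.Random(i).sample(range(20), 6)) for i in range(10)]
print(sorted(select_diverse_subset(tests, budget=3)))
```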
