Test-suite reduction for model-based tests: effects on test quality and implications for testing

Model checking techniques can be successfully employed to generate test cases from formal models. The number of test cases produced, however, is typically large for complex coverage criteria such as MCDC. Test-suite reduction can provide a smaller set of test cases that preserves the original coverage, often a dramatically smaller set. One potential drawback of test-suite reduction is that it might affect the quality of the test suite in terms of fault finding. Previous empirical studies provide conflicting evidence on this issue. To further investigate the problem and determine its effect when testing formal models of software, we performed an experiment using a large case example of a flight guidance system: we generated reduced test suites for a variety of structural coverage criteria while preserving coverage, and recorded their fault-finding effectiveness. Our results show that the size of the specification-based test suites can be dramatically reduced and that the fault detection of the reduced test suites is adversely affected. In this report we describe our experiment, analyze the results, and discuss the implications for testing based on formal specifications.
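The reduction discussed above amounts to selecting a subset of the generated tests that still satisfies every coverage obligation met by the full suite; exact minimization is NP-complete, so in practice it is commonly approximated with a greedy set-cover heuristic. The sketch below is a minimal Python illustration of that idea, assuming a simple mapping from tests to covered obligations; it is not the reduction procedure used in this study, and the function name `reduce_suite` and the test/obligation identifiers are hypothetical.

```python
# Minimal sketch of coverage-preserving test-suite reduction using a greedy
# set-cover heuristic (illustration only; not the procedure from the study).

def reduce_suite(coverage: dict[str, set[str]]) -> list[str]:
    """Select a subset of tests covering every obligation covered by the
    full suite. Exact minimization is NP-complete, so we approximate it
    greedily: repeatedly pick the test covering the most uncovered obligations."""
    remaining = set().union(*coverage.values())  # obligations still uncovered
    reduced: list[str] = []
    while remaining:
        # Test that covers the largest number of still-uncovered obligations.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break  # no test adds coverage; cannot happen with consistent input
        reduced.append(best)
        remaining -= gained
    return reduced

if __name__ == "__main__":
    # Hypothetical coverage obligations o1..o5 covered by four generated tests.
    suite = {
        "t1": {"o1", "o2"},
        "t2": {"o2", "o3"},
        "t3": {"o3", "o4", "o5"},
        "t4": {"o1", "o5"},
    }
    print(reduce_suite(suite))  # e.g. ['t3', 't1'], same coverage, half the tests
```

On this toy input the heuristic keeps only t3 and t1, which together cover the same five obligations as the full four-test suite; this mirrors, on a small scale, the kind of dramatic size reduction reported above, while leaving open the question of what such a reduction does to fault detection.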
