Muteria: An Extensible and Flexible Multi-Criteria Software Testing Framework

Program-based test adequacy criteria (TACs), such as statement coverage, branch coverage, and mutation, provide objectives for software testing. Many techniques and tools have been developed to improve each phase of the TAC-based software testing process. Nonetheless, the engineering effort required to integrate these tools and techniques into the software testing process limits their use and creates overhead for users, especially for system testing with languages like C, where test cases are not always well structured within a framework. In response to these challenges, this paper presents Muteria, a TAC-based software testing framework that enables the integration of multiple software testing tools. Muteria abstracts each phase of the TAC-based software testing process and exposes tool-driver interfaces for the implementation of tool drivers; these drivers enable Muteria to invoke the corresponding tools during the testing process. An initial set of drivers was implemented, in an average of 345 lines of Python code each, for the KLEE, Shadow, and SEMu test-generation tools, the Gcov and Coverage.py code-coverage tools, and the Mart mutant-generation tool, targeting the C and Python programming languages. Moreover, the user configuration file required to measure code coverage and mutation score on a sample C program with the Muteria framework consists of fewer than 15 configuration variables. Users of the Muteria framework select, in a configuration file, the tools to run and the TACs to measure; Muteria then uses this configuration to run the testing process and report the outcome. Users interact with Muteria through its Application Programming Interface (API) and Command Line Interface (CLI). Muteria can benefit researchers, as a laboratory in which to execute experiments, as well as software practitioners.
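To give a sense of the scale of such a configuration, the following is a minimal sketch of what a sub-15-variable configuration file might look like. The variable names below are illustrative assumptions for this sketch, not Muteria's actual configuration keys; only the tool names (KLEE, Shadow, Gcov, Mart) and the under-15-variables figure come from the abstract.

```python
# Hypothetical sketch of a Muteria-style configuration file for a C program.
# All variable names here are illustrative assumptions, not Muteria's
# actual configuration keys.

PROGRAMMING_LANGUAGE = "C"
REPOSITORY_ROOT_DIR = "/path/to/project"
TARGET_SOURCE_FILES = ["src/program.c"]
BUILD_COMMAND = ["make", "clean", "all"]

# Tools selected for each phase of the TAC-based testing process.
TEST_GENERATION_TOOLS = ["klee", "shadow"]
CODE_COVERAGE_TOOLS = ["gcov"]
MUTANT_GENERATION_TOOLS = ["mart"]

# Test adequacy criteria to measure.
ENABLED_CRITERIA = ["statement_coverage", "branch_coverage", "mutant_coverage"]

OUTPUT_ROOT_DIR = "/path/to/output"

# The abstract reports that fewer than 15 configuration variables suffice;
# this sketch stays within that budget.
assert sum(1 for name in dir() if name.isupper()) < 15
```

A configuration of this shape would let the framework dispatch each phase (test generation, coverage measurement, mutant generation) to the selected tool drivers without the user scripting the tools individually.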

[1] D. L. Parnas et al., On the criteria to be used in decomposing systems into modules, 1972, Software Pioneers.

[2] Dawson R. Engler et al., KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs, 2008, OSDI.

[3] Hironori Washizaki et al., Open Code Coverage Framework: A Consistent and Flexible Framework for Measuring Test Coverage Supporting Multiple Programming Languages, 2010, 10th International Conference on Quality Software.

[4] A. Jefferson Offutt et al., A mutation carol: Past, present and future, 2011, Inf. Softw. Technol.

[5] Sang-Woon Kim et al., Combining weak and strong mutation for a noninterpretive Java mutation system, 2013, Softw. Test. Verification Reliab.

[6] Cristian Cadar et al., Shadow of a Doubt: Testing for Divergences between Software Versions, 2016, IEEE/ACM 38th International Conference on Software Engineering (ICSE).

[7] Yves Le Traon et al., An Empirical Study on Mutation, Statement and Branch Coverage Fault Revelation That Avoids the Unreliable Clean Program Assumption, 2017, IEEE/ACM 39th International Conference on Software Engineering (ICSE).

[8] Goran Petrovic et al., State of Mutation Testing at Google, 2018, IEEE/ACM 40th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP).

[9] Yves Le Traon et al., Mart: a mutant generation tool for LLVM, 2019, ESEC/SIGSOFT FSE.

[10] Koushik Sen et al., Selecting fault revealing mutants, 2018, Empirical Software Engineering.

[11] James B. Stryker et al., Move Fast and Break Things: Silicon Valley and the Language of Entrepreneurial Leadership, 2020.

[12] Yves Le Traon et al., Killing Stubborn Mutants with Symbolic Execution, 2020, ArXiv.