Enabledness-based Testing of Object Protocols

A significant proportion of classes in modern software introduce or use object protocols, that is, prescriptions on the temporal ordering of method calls on objects. This article studies search-based test generation techniques that aim to exploit a particular abstraction of object protocols, enabledness preserving abstractions (EPAs), to find failures. We define coverage criteria over an extension of EPAs that accounts for abnormal method termination, and we define a search-based test case generation technique aimed at achieving high coverage under these criteria. Results suggest that the proposed test case generation technique, when driven by a fitness function that combines structural and extended-EPA coverage, provides better failure-detection capabilities, not only for protocol failures but also for general failures, than random testing and search-based test generation targeting standard structural coverage alone.
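
To make the notion of an object protocol and its enabledness-based abstraction concrete, the sketch below shows a hypothetical Java class (BoundedBuffer, not taken from the article) whose methods are only enabled in certain states and which signals protocol violations by throwing IllegalStateException. An EPA groups the concrete states of such a class by the set of methods that are currently enabled; the extended EPAs studied here additionally distinguish transitions that terminate abnormally.

```java
// Hypothetical example of a class with an object protocol.
// The concrete state space (all possible buffer contents) is abstracted by an
// EPA into states defined by which methods are enabled; the extended EPA
// discussed in the article also records abnormal termination, here the
// IllegalStateException transitions.
public class BoundedBuffer {
    private final Object[] items;
    private int size = 0;

    public BoundedBuffer(int capacity) {
        items = new Object[capacity];
    }

    // Enabled only while the buffer is not full.
    public void put(Object item) {
        if (size == items.length) {
            throw new IllegalStateException("put called on a full buffer");
        }
        items[size++] = item;
    }

    // Enabled only while the buffer is not empty.
    public Object take() {
        if (size == 0) {
            throw new IllegalStateException("take called on an empty buffer");
        }
        return items[--size];
    }
}
```

For this class, an EPA has abstract states such as "only put enabled" (empty buffer), "only take enabled" (full buffer), and "put and take both enabled"; the extended EPA additionally includes the transitions that raise IllegalStateException. Coverage criteria over the extended EPA then require test cases that exercise these abstract states and transitions, including the abnormally terminating ones.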
