Simulation-based test adequacy criteria for distributed systems

Developers of distributed systems routinely construct discrete-event simulations to help understand and evaluate the behavior of inter-component protocols. Simulations are abstract models of systems and their environments: they capture basic algorithmic functionality while focusing attention on properties critical to distribution, such as topology, timing, bandwidth, and overall scalability. We claim that simulations can be treated as a form of specification, and thereby used within a specification-based testing regime to give developers a rich new basis for defining and applying system-level test adequacy criteria. We describe a framework for evaluating distributed-system test adequacy criteria and demonstrate our approach on simulations and implementations of three distributed systems, including DNS, the Domain Name System.
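To make the idea concrete, the sketch below (not the paper's actual framework; all names and the protocol are hypothetical) shows one way a discrete-event simulation can act as a specification for adequacy measurement: the simulation enumerates the abstract protocol events a correct run should exhibit, and a test suite is judged by the fraction of those events its implementation-level traces exercise.

```python
# Minimal sketch, assuming: a toy lookup protocol, abstract event names chosen
# for illustration, and test traces already mapped to those event names.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: float
    name: str = field(compare=False)   # abstract protocol event, e.g. "lookup.forwarded"
    node: str = field(compare=False)

class Simulation:
    """Toy discrete-event simulator for a request/response lookup protocol."""
    def __init__(self):
        self.queue = []
        self.observed = set()          # abstract events produced by the simulation

    def schedule(self, time, name, node):
        heapq.heappush(self.queue, Event(time, name, node))

    def run(self):
        while self.queue:
            ev = heapq.heappop(self.queue)
            self.observed.add(ev.name)
            # Toy protocol behavior: a lookup is forwarded once, then answered.
            if ev.name == "lookup.sent":
                self.schedule(ev.time + 0.5, "lookup.forwarded", "relay")
            elif ev.name == "lookup.forwarded":
                self.schedule(ev.time + 0.5, "lookup.answered", "server")

def simulation_event_coverage(spec_events, test_traces):
    """Adequacy metric: fraction of simulation-defined events exercised by the tests."""
    exercised = set()
    for trace in test_traces:
        exercised.update(trace)
    return len(spec_events & exercised) / len(spec_events) if spec_events else 1.0

if __name__ == "__main__":
    sim = Simulation()
    sim.schedule(0.0, "lookup.sent", "client")
    sim.run()   # simulation defines {"lookup.sent", "lookup.forwarded", "lookup.answered"}

    # Hypothetical traces collected from running the implementation's test suite.
    traces = [["lookup.sent", "lookup.answered"]]
    print(simulation_event_coverage(sim.observed, traces))  # ~0.67: "lookup.forwarded" never exercised
```

An uncovered event such as "lookup.forwarded" points the developer at distribution-specific behavior (here, relaying) that the current tests never drive, which is the kind of system-level gap a simulation-derived criterion is meant to expose.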
