To Call, or Not to Call: Contrasting Direct and Indirect Branch Coverage in Test Generation

While adequacy criteria offer an end-point for testing, they do not mandate how targets are covered. Branch Coverage may be attained through direct calls to methods or through indirect calls between methods. Automated generation is biased towards the rapid gains offered by indirect coverage; therefore, even with the same end-goal, humans and automation produce very different tests. Direct coverage may yield tests that are more understandable and that detect faults missed by traditional approaches. However, the added burden on the generation framework may result in lower coverage, and faults that emerge through method interactions may be missed. To compare the two approaches, we have generated test suites for both, judging efficacy against real faults. We have found that requiring direct coverage results in lower achieved coverage and a lower likelihood of fault detection. However, each form of Branch Coverage covers code and detects faults that the other does not. By isolating methods, Direct Branch Coverage is less constrained in its choice of input. Traditional Branch Coverage, in turn, is able to leverage method interactions to discover faults. Ultimately, both are situationally applicable within the context of a broader testing strategy.
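To make the distinction concrete, consider the minimal sketch below (hypothetical `Cart` class and method names, not drawn from the study). The same branch can be covered either directly, by a test invoking the method under test, or indirectly, through another method that calls it:

```java
// Hypothetical example contrasting direct and indirect Branch Coverage
// of the same branch.
public class Cart {

    // Method under test: contains the target branch.
    double applyDiscount(double total) {
        if (total > 100.0) {   // target branch (true/false outcomes)
            return total * 0.9;
        }
        return total;
    }

    // Caller: reaches the same branch transitively.
    public double checkout(double total) {
        return applyDiscount(total);
    }

    public static void main(String[] args) {
        Cart cart = new Cart();
        // Direct coverage: the test invokes applyDiscount() itself and is
        // free to choose inputs for both branch outcomes.
        System.out.println(cart.applyDiscount(150.0)); // true branch
        System.out.println(cart.applyDiscount(50.0));  // false branch
        // Indirect coverage: the test invokes checkout(), covering the same
        // branch through a method interaction.
        System.out.println(cart.checkout(150.0));      // true branch, indirectly
    }
}
```

A generator rewarded only for covering the branch may prefer the `checkout()` route, since one call can cover many transitively reached targets, whereas requiring direct coverage forces a separate, isolated call to `applyDiscount()`.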
