Enhancing Software Testing by Judicious Use of Code Coverage Information

Recently, tools for the analysis and visualization of code coverage have become widely available. At first glance, their value in assessing and improving the quality of automated test suites seems obvious. Yet, experimental studies as well as experience from industrial projects indicate that their use is not without pitfalls. Having found these tools quite beneficial in a number of recent projects, we set out to gather code coverage information from one of these projects. In this experience report, we first describe the system under scrutiny and our methodology. We then discuss four major questions concerning the impact and benefits of using these tools, and we derive a list of ten lessons learned. This list may help developers use code coverage tools judiciously, in order to reap the maximum benefit from them.
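
To illustrate the kind of data such tools report (this sketch is our own illustration, not taken from the paper; the file names and coverage figures are hypothetical), per-file statement coverage can be computed as the fraction of executable lines that the test suite actually executed:

# Minimal sketch: compute per-file statement coverage from executed line numbers.
# The data below is made up; real tools gather it via instrumentation or tracing.

def statement_coverage(executable_lines, executed_lines):
    """Return the fraction of executable lines hit by the test suite."""
    if not executable_lines:
        return 1.0
    return len(executable_lines & executed_lines) / len(executable_lines)

# Hypothetical coverage data for two source files.
executable = {"Parser.java": {10, 11, 14, 20, 21}, "Cache.java": {5, 6, 9}}
executed   = {"Parser.java": {10, 11, 14},         "Cache.java": {5, 6, 9}}

for path, lines in executable.items():
    cov = statement_coverage(lines, executed.get(path, set()))
    print(f"{path}: {cov:.0%} statement coverage")

Visualization front-ends typically aggregate such per-file figures and highlight uncovered lines in the source view, which is the information the report's lessons learned are concerned with.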
