Coverage discounting: A generalized approach for testbench qualification

In simulation-based validation, the detection of design errors requires both stimulus capable of activating the errors and checkers capable of flagging the resulting behavior as erroneous. Validation coverage metrics tend to address only the sufficiency of a testbench's stimulus component, whereas fault insertion techniques focus on the testbench's checker component. In this paper we introduce "coverage discounting", an analytical technique that combines the benefits of each approach, overcomes their respective shortcomings, and provides significantly more information than performing both tasks separately. The proposed approach can be used with any functional coverage metric (including, and ideally, user-defined covergroups and bins) and a variety of fault models and insertion mechanisms. We present an experimental case study in which the proposed approach is used to evaluate functional and pseudofunctional tests for a microprocessor. Simulation efficiency is improved through the use of an instruction set simulator, which has been instrumented to record functional coverage information as well as to insert faults according to an ad hoc fault model. The results demonstrate the benefits of coverage discounting: it correctly distinguishes high- and low-quality tests with similar coverage scores and exposes checker insufficiencies.
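The core idea can be sketched programmatically: a covered bin loses its credit when an inserted fault perturbs behavior observed at that bin yet no checker fires. The following is a minimal illustrative sketch, assuming per-run bin hit counts are available; the function name and data shapes are hypothetical and not the paper's actual implementation.

```python
def discounted_coverage(baseline_hits, faulty_runs):
    """Compute the set of coverage bins that retain credit after discounting.

    baseline_hits: dict mapping bin name -> hit count from the fault-free run.
    faulty_runs:   list of (detected, hits) pairs, one per inserted fault,
                   where `detected` is True if any checker flagged the fault
                   and `hits` is that run's bin -> hit count dict.

    A bin is discounted when an UNDETECTED fault changed its hit count:
    the stimulus exercised the bin under erroneous behavior, but the
    checkers failed to notice, so the coverage credit is suspect.
    """
    covered = {b for b, n in baseline_hits.items() if n > 0}
    discounted = set()
    for detected, hits in faulty_runs:
        if detected:
            continue  # checkers caught this fault; no discounting needed
        for b in covered:
            if hits.get(b, 0) != baseline_hits[b]:
                discounted.add(b)
    return covered - discounted
```

Under this sketch, a test with high raw coverage but weak checkers would see many of its bins discounted, while a test whose checkers catch every inserted fault keeps full credit.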
