Evaluation of an efficient control-oriented coverage metric

Dynamic verification, the use of simulation to determine design correctness, is widely used because it remains tractable for large designs. A serious limitation of dynamic techniques is the difficulty of determining whether a test sequence is sufficient to detect all likely design errors. Coverage metrics address this problem by providing a set of goals to be achieved during simulation; if all coverage goals are satisfied, the test sequence is assumed to be complete. Many coverage metrics have been proposed, but little effort has been made to establish a correlation between existing metrics and design quality. In this paper, we present a technique to evaluate a coverage metric by examining its ability to ensure the detection of real design errors. We apply this evaluation technique to our control-oriented coverage metric to confirm its ability to reveal design errors.
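The sketch below illustrates one way such an evaluation can be organized, assuming an error-injection (mutation-style) setup in which known design errors are inserted, the test sequence is checked against the coverage goals, and the fraction of covered-but-undetected errors is measured. All identifiers here (`Design`, `coverage_goals_met`, etc.) are illustrative assumptions and are not the paper's actual tooling or API.

```python
# Hypothetical sketch: evaluating a coverage metric against injected design errors.
# Idea: among erroneous designs on which the test sequence satisfies every
# coverage goal, count how many the same tests actually expose. A high
# detected/covered ratio suggests the metric's goals are a good proxy for
# real error detection.

from typing import Callable, List, Sequence, Tuple

# A design is modeled abstractly as a function from a test stimulus to an output.
Design = Callable[[object], object]

def evaluate_coverage_metric(
    golden: Design,
    erroneous_designs: List[Design],
    tests: Sequence[object],
    coverage_goals_met: Callable[[Design, Sequence[object]], bool],
) -> Tuple[int, int]:
    """Return (covered, detected): the number of erroneous designs whose
    coverage goals are all satisfied by `tests`, and how many of those the
    tests also expose via an output mismatch with the golden design."""
    covered = detected = 0
    golden_outputs = [golden(t) for t in tests]
    for design in erroneous_designs:
        if not coverage_goals_met(design, tests):
            continue  # the metric makes no completeness claim for this design
        covered += 1
        if [design(t) for t in tests] != golden_outputs:
            detected += 1  # the injected error changed observable behavior
    return covered, detected
```

Under this assumed setup, a metric whose satisfied goals frequently coexist with undetected injected errors would be judged weak, while one whose goal satisfaction reliably implies detection would be judged effective.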
