On the Saturation of $n$-Detection Test Generation by Different Definitions With Increased $n$

An $n$-detection test set contains $n$ different tests for each target fault. The value of $n$ is typically determined based on test set size constraints, and certain values have become standard. Appropriate values for $n$ are investigated in this paper by considering the saturation of the $n$-detection test generation process. As $n$ is increased, the rate of increase in test set quality eventually starts dropping. Saturation occurs when the increase in test set quality with $n$ drops below a certain level. Three parameters of an $n$-detection test set are introduced to measure the saturation of the test generation process: 1) the fraction of faults detected $n$ times or less by the test set; 2) the fraction of faults detected fewer than $n$ times by the test set; and 3) the test set size relative to the size of a one-detection test set. It is demonstrated that the behavior of each one of these parameters follows a unique pattern as $n$ is increased, and certain features of this behavior can be used to identify saturation. All the parameters can be computed efficiently during the test generation process.
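
The following is a minimal sketch of how the three saturation parameters could be computed from per-fault detection counts. It is not the paper's implementation; the function name, argument names, and the assumption that detection counts and test set sizes are already available (under whichever definition of detection is in use) are illustrative only.

```python
from typing import Sequence


def saturation_parameters(detection_counts: Sequence[int],
                          n: int,
                          test_set_size: int,
                          one_detection_size: int) -> tuple[float, float, float]:
    """Compute the three saturation parameters of an n-detection test set.

    detection_counts[i] is the number of tests in the set that detect fault i
    (under the chosen definition of the number of detections).
    test_set_size is the size of the n-detection test set, and
    one_detection_size is the size of a one-detection test set for comparison.
    """
    total_faults = len(detection_counts)

    # Parameter 1: fraction of faults detected n times or less.
    p1 = sum(1 for c in detection_counts if c <= n) / total_faults

    # Parameter 2: fraction of faults detected fewer than n times.
    p2 = sum(1 for c in detection_counts if c < n) / total_faults

    # Parameter 3: test set size relative to a one-detection test set.
    p3 = test_set_size / one_detection_size

    return p1, p2, p3


# Example (hypothetical data): five faults, target n = 3.
if __name__ == "__main__":
    print(saturation_parameters([1, 3, 3, 5, 7], n=3,
                                test_set_size=40, one_detection_size=10))
```

Tracking how these three values change as $n$ grows is what allows saturation to be identified; all three can be obtained from bookkeeping already performed during test generation.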
