Assessing the risk due to software faults: estimates of failure rate versus evidence of perfection

In the debate over the assessment of software reliability (or safety) for critical software, two extreme positions can be discerned: the ‘statistical’ position, which requires that claims of reliability be supported by statistical inference from realistic testing or operation, and the ‘perfectionist’ position, which requires convincing indications that the software is free from defects. These two positions naturally lead to requiring different kinds of supporting evidence, and indeed to stating the dependability requirements in different ways, so that the two cannot be compared directly. There is often confusion about the relationship between statements about software failure rates and statements about software correctness, and about which evidence can support either kind of statement. This note clarifies the meaning of the two kinds of statement and how each relates to the probability of failure-free operation, and discusses their practical merits, especially where very high reliability or safety is required.
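As an illustrative sketch of how the two kinds of statement can be related (the notation here is assumed, not taken from the note itself): let p denote the assessed probability that the program is fault-free (‘perfect’), and θ the probability of failure per demand given that it is imperfect. The probability of observing n consecutive failure-free demands then combines the perfectionist and statistical claims:

\[
P(\text{no failure in } n \text{ demands}) \;=\; p \;+\; (1-p)\,(1-\theta)^{n} \;\longrightarrow\; p \quad (n \to \infty).
\]

Under these assumptions, the second term vanishes as n grows, so long-run confidence in failure-free operation is bounded below by the probability of perfection p; this is one way of seeing why, for very high reliability requirements, evidence bearing on p can matter more than statistical testing evidence, which only bounds θ.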
