Assessing the Risk due to Software Faults: Estimates of Failure Rate versus Evidence of Perfection

In the debate over assessing the reliability (or safety) of critical software, two extreme positions can be discerned: the "statistical" position, which requires that reliability claims be supported by statistical inference from realistic testing or operation, and the "perfectionist" position, which requires convincing evidence that the software is free from defects. The two positions naturally lead to requiring different kinds of supporting evidence, and indeed to stating dependability requirements in forms that do not allow direct comparison. There is often confusion about the relationship between statements about software failure rates and statements about software correctness, and about which evidence can support either kind of statement. This note clarifies the meaning of the two kinds of statements and how they relate to the probability of failure-free operation, and discusses their practical merits, especially when high reliability or safety is required.
