Accounting for realities when estimating the field failure rate of software

A realistic estimate of the field failure rate of software is essential to decide when to release the software while maintaining an appropriate balance between reliability, time-to-market, and development cost. Typically, software reliability models are applied to system test data in the hope of obtaining an estimate of the software failure rate that will be observed in the field. Unfortunately, test environments are usually quite different from field environments. In this paper, we use a calibration factor to characterize the mismatch between the system test environment and the field environment, and then incorporate the factor into a widely used software reliability model. For projects that have both system test data and field data for one or more previous releases, the calibration factor can be evaluated empirically and used to estimate the field failure rate of a new release from its system test data. For new projects, the calibration factor can be estimated by matching the software to related projects that have both system test data and field data. In practice, isolating and removing a software fault is a complicated process. As a result, a fault may be encountered more than once before it is ultimately removed. Most software reliability growth models assume instantaneous fault removal; we relax this assumption by relating non-zero fault removal times to imperfect debugging. Finally, we distinguish between two types of faults based on whether their observed occurrence would precipitate a fix in the current release or in a future one. Type-F faults, which are fixed in the current release, contribute a decreasing (growth) component to the overall failure rate. Type-D faults, whose fix is deferred to a subsequent release, contribute a constant component. The aggregate software failure rate is thus the sum of a decreasing failure rate and a constant failure rate.
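The two ideas above — a calibration factor estimated from a previous release, and an aggregate rate that sums a decreasing Type-F component and a constant Type-D component — can be sketched as follows. This is a minimal illustration, not the paper's exact model: it assumes a Goel–Okumoto-style exponential form `a*b*exp(-b*t)` for the growth component, and the parameter values (`a`, `b`, `lam_d`, the previous-release rates) are hypothetical.

```python
import math

def calibration_factor(lambda_field_prev: float, lambda_test_prev: float) -> float:
    """Empirical calibration factor from a previous release: the ratio of the
    failure rate observed in the field to the rate observed in system test."""
    return lambda_field_prev / lambda_test_prev

def aggregate_failure_rate(t: float, a: float, b: float,
                           lam_d: float, k: float) -> float:
    """Aggregate field failure rate at time t:
    a calibrated, decreasing Type-F component (Goel-Okumoto form assumed here)
    plus a constant Type-D component lam_d."""
    lam_f = a * b * math.exp(-b * t)  # Type-F: decreasing with continued fixing
    return k * lam_f + lam_d          # Type-D fixes deferred, so lam_d is flat

# Hypothetical numbers: the previous release failed at 0.10 failures/unit-time
# in system test but only 0.02 in the field, giving k = 0.2.
k = calibration_factor(lambda_field_prev=0.02, lambda_test_prev=0.10)
rate_at_release = aggregate_failure_rate(t=0.0, a=150.0, b=0.01, lam_d=0.005, k=k)
```

The sketch makes the structure visible: as `t` grows, the Type-F term decays toward zero and the aggregate rate flattens out at the Type-D floor `lam_d`.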
