Probabilistic Models for Software Reliability Prediction

Summary: With the advent of large, sophisticated hardware-software systems in the 1960s, the problem of computer system reliability emerged. The reliability of computer hardware can be modeled in much the same way as that of other devices using conventional reliability theory; computer software errors, however, require a different approach. This paper presents a newly developed probabilistic model for predicting software reliability. The model constants are calculated from error data collected on similar previous programs. The calculations yield a reliability function: the probability of no software errors, which decreases with operating time. The rate at which reliability decreases depends on the man-months of debugging effort expended. Similarly, the mean time between operational software errors (MTBF) is obtained; the MTBF increases slowly at first and then more rapidly as the debugging effort (in man-months) grows. The model permits software reliability to be estimated before any code is written, and the parameter estimates can be updated once integration or operational testing begins, improving their accuracy.
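The qualitative behavior described above can be sketched numerically. The following Python fragment is an illustrative model only, not the paper's actual formulation: it assumes the operational failure rate is proportional to the number of errors remaining in the program, and that debugging removes errors at a constant rate per man-month. All parameter values (`total_errors`, `corrected_per_month`, `k`) are invented for illustration.

```python
import math

def remaining_errors(total_errors, corrected_per_month, months_debugged):
    """Errors assumed to remain after a given debugging effort (man-months)."""
    return max(total_errors - corrected_per_month * months_debugged, 0.0)

def failure_rate(months_debugged, total_errors=100.0,
                 corrected_per_month=8.0, k=0.001):
    """Assumed operational failure rate: proportional to remaining errors."""
    return k * remaining_errors(total_errors, corrected_per_month, months_debugged)

def reliability(t, months_debugged, **params):
    """Probability of no software error during t hours of operation.

    With a constant failure rate r, the probability of zero errors in
    time t is exp(-r * t): it starts at 1 and decreases with t, and it
    decreases more slowly the more debugging has been done.
    """
    return math.exp(-failure_rate(months_debugged, **params) * t)

def mtbf(months_debugged, **params):
    """Mean time between failures: reciprocal of the failure rate."""
    r = failure_rate(months_debugged, **params)
    return math.inf if r == 0.0 else 1.0 / r
```

Under these assumptions the MTBF, 1 / (k * (E - c * tau)), grows slowly at first and then steeply as the remaining-error count approaches zero, reproducing the behavior the summary describes.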