Providing an empirical basis for optimizing the verification and testing phases of software development

Applying equal testing and verification effort to all parts of a software system is not efficient, especially when resources are limited and schedules are tight. One therefore needs to be able to differentiate between low and high fault density components so that testing and verification effort can be concentrated where it is needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. The authors present an alternative approach for constructing such models that is intended to fulfil specific software engineering needs, i.e. dealing with partial/incomplete information and building models that are easy to interpret. The approach to classification is to measure the software system under consideration and to build multivariate stochastic models for prediction. The authors present experimental results obtained by classifying FORTRAN components into two fault density classes: low and high. They also evaluate the accuracy of the model and the insights it provides into the software process.
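
As a concrete illustration of the kind of classifier the abstract describes, the sketch below fits a simple statistical model to component metrics and predicts a low/high fault density class for a new component. It is a minimal sketch only: the abstract specifies no model family beyond "multivariate stochastic models", so logistic regression stands in here, and the metric names (lines of code, cyclomatic complexity, fan-out), the training data, and the labels are all hypothetical.

```python
# Illustrative sketch only. Logistic regression is a stand-in for the
# paper's "multivariate stochastic models"; metrics and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical component metrics: [lines of code, cyclomatic complexity, fan-out]
X_train = np.array([
    [120,  4,  3],   # small, simple component
    [950, 31, 12],   # large, complex component
    [300,  9,  5],
    [1400, 45, 20],
    [80,   2,  1],
    [700, 22,  9],
])
# Synthetic labels: 0 = low fault density, 1 = high fault density
y_train = np.array([0, 1, 0, 1, 0, 1])

# Fit a multivariate classifier relating the metrics to the fault density class
model = LogisticRegression().fit(X_train, y_train)

# Classify a new component so verification effort can be focused where needed
new_component = np.array([[850, 28, 11]])
print("predicted class:", model.predict(new_component)[0])           # 0=low, 1=high
print("P(high fault density):", model.predict_proba(new_component)[0, 1])
```

In practice, a model like this would be trained on metrics and fault data from past projects or earlier releases, and components predicted to fall in the high fault density class would receive a larger share of the testing and verification budget.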