The establishment and use of measures to evaluate the quality of software designs

It is widely recognized that producing designs that yield reliable software, even with Structured Design, depends heavily on the experience of the designer. The gap in this methodology is the absence of easily applied quantitative measures of design quality that would reduce the reliance of reliable systems on scarce expert designers. Several metrics have been devised which, when applied to design structure charts, can pinpoint sections of a design likely to cause problems during coding, debugging, integration, and modification, and can thus provide an independent, unbiased evaluation of design quality. These metrics have been validated against program error data from two recently completed software projects at Hughes. The results indicate that the metrics can serve as a predictive measure of the program errors experienced during development. Guidelines for interpreting the design metric values are summarized, and an interactive structure-chart graphics system that simplifies metric calculation is briefly described.
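The abstract does not define the metrics themselves, so as a hypothetical illustration only, the sketch below computes one measure commonly applied to structure charts, module fan-in and fan-out, and flags modules whose high fan-out might mark them as candidates for problems during integration and modification. The chart representation, function name, and threshold are all assumptions, not the paper's actual metrics.

```python
# Hypothetical sketch: a structure chart is modeled as a mapping from each
# calling module to the list of modules it invokes. Fan-in/fan-out counts
# are one simple, easily applied quantitative measure of design structure.

def fan_metrics(calls):
    """Given a structure chart as {caller: [callees]}, return
    {module: (fan_in, fan_out)} for every module mentioned."""
    modules = set(calls)
    for callees in calls.values():
        modules.update(callees)
    fan_in = {m: 0 for m in modules}
    fan_out = {m: 0 for m in modules}
    for caller, callees in calls.items():
        fan_out[caller] = len(callees)
        for callee in callees:
            fan_in[callee] += 1
    return {m: (fan_in[m], fan_out[m]) for m in modules}

# Example chart: 'dispatch' calls many modules (high fan-out), a common
# warning sign that the module may be hard to modify safely.
chart = {
    "main": ["dispatch"],
    "dispatch": ["parse", "validate", "store", "report", "log"],
    "report": ["log"],
}
metrics = fan_metrics(chart)
flagged = [m for m, (fi, fo) in metrics.items() if fo > 4]
print(flagged)  # ['dispatch']
```

A tool such as the interactive structure-chart graphics system mentioned in the abstract would compute values like these automatically as the chart is drawn.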