Measuring Software Dependability by Robustness Benchmarking

The development of robust software systems is frequently hindered by the inability to identify weaknesses or to quantify improvements in system robustness. Software robustness benchmarks have been proposed to address this problem, but existing efforts suffer from significant shortcomings. This paper presents the features that are desirable in a benchmark of system robustness and evaluates several existing benchmarks against these features. It then presents a new hierarchically structured approach to building robustness benchmarks that overcomes many deficiencies of past efforts. This approach has been applied to construct a hierarchically structured benchmark that exercises part of the Unix file and virtual memory systems. The resulting benchmark has been used successfully to identify response class structures that less organized techniques failed to detect in a comparable setting.
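At its core, a robustness benchmark of this kind drives operating-system interfaces with exceptional inputs and classifies how the system responds. The C sketch below illustrates that idea for a single Unix file-system call; the particular test cases, response-class names, and driver structure are illustrative assumptions for this example, not the benchmark's actual design.

```c
/*
 * Illustrative sketch (not the paper's benchmark): drive read() with
 * exceptional argument values and record the response class observed
 * (clean error return, apparent success, or crash).
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* One exceptional input for read(): descriptor, buffer, length. */
struct test_case {
    const char *name;
    int fd;
    void *buf;
    size_t len;
};

static void classify(const struct test_case *tc)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return;
    }
    if (pid == 0) {
        /* Child performs the call so a crash cannot take down the driver. */
        ssize_t r = read(tc->fd, tc->buf, tc->len);
        /* Encode the outcome in the exit status: 0 = success, 1 = error return. */
        _exit(r < 0 ? 1 : 0);
    }

    int status = 0;
    waitpid(pid, &status, 0);

    if (WIFSIGNALED(status))
        printf("%-15s -> CRASH (signal %d)\n", tc->name, WTERMSIG(status));
    else if (WEXITSTATUS(status) == 1)
        printf("%-15s -> ERROR RETURN\n", tc->name);
    else
        printf("%-15s -> SUCCESS\n", tc->name);
}

int main(void)
{
    char buf[16];
    struct test_case cases[] = {
        { "valid args",  open("/etc/passwd", O_RDONLY), buf,  sizeof buf },
        { "bad fd",      -1,                            buf,  sizeof buf },
        { "NULL buffer", open("/etc/passwd", O_RDONLY), NULL, sizeof buf },
    };

    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++)
        classify(&cases[i]);

    return 0;
}
```

A hierarchically structured benchmark would organize many such tests by subsystem (file system, virtual memory) and by call, so that the response classes observed at each level can be compared and aggregated.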
