Dependability Benchmarking of Web-Servers

The assessment of the dependability properties of a system (dependability benchmarking) is a critical step when choosing among similar components or products. This paper proposes a benchmark for the dependability properties of web servers. The benchmark is composed of three key components: measures, workload, and faultload. We use the SPECWeb99 benchmark as a starting point, adopting its workload and performance measures, and add a faultload and new dependability-related measures. We illustrate the use of the proposed benchmark through a case study involving two widely used web servers (Apache and Abyss) running on top of three different operating systems. The faultloads encompass software faults, hardware faults, and network faults. We show that the proposed dependability benchmark exposes clear differences in the dependability properties of the web servers.
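The combination of performance and dependability measures described above can be sketched as follows. This is a hypothetical illustration only: the function names, the specific formulas, and the numeric values are assumptions for exposition, not measures or data taken from the paper. It shows one plausible way to derive a degradation and an availability figure by comparing a fault-free (SPECWeb99-style) run with a run under an injected faultload.

```python
# Hypothetical sketch of dependability-oriented measures derived from
# two benchmark runs: one fault-free baseline and one under a faultload.
# All names and numbers below are illustrative assumptions.

def availability(successful_requests: int, total_requests: int) -> float:
    """Fraction of client requests the web server answered correctly
    during the faultload run."""
    return successful_requests / total_requests

def throughput_degradation(baseline_ops_s: float, faulty_ops_s: float) -> float:
    """Relative throughput loss versus the fault-free baseline run."""
    return (baseline_ops_s - faulty_ops_s) / baseline_ops_s

if __name__ == "__main__":
    baseline = 420.0       # fault-free throughput (ops/s), illustrative
    under_faults = 310.0   # throughput while faults are injected, illustrative
    print(f"throughput degradation: "
          f"{throughput_degradation(baseline, under_faults):.1%}")
    print(f"availability: {availability(9_850, 10_000):.2%}")
```

Comparing two servers would then amount to running both under the same workload and faultload and contrasting these derived figures alongside the raw SPECWeb99 performance results.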
