Operating System Robustness Forecast and Selection

While commercial off-the-shelf (COTS) operating systems (OSs) have long been in widespread use, the question of their robustness remains far from settled. Despite many efforts in this research area, it is still difficult to choose among OSs on robustness grounds. This paper proposes a reference model for OS robustness forecast and selection, which forecasts the robustness of a given OS under a given operational profile. The model can also select an OS as a development or operating platform that meets the particular robustness requirements of a target workload. It combines an OS's overall robustness with operational profiles, using extensive tests on OS APIs for the calculation. We measured 255 APIs and C-library functions on Windows XP and Vista, and 197 C-library functions on Linux 2.6.22 (Ubuntu 7.10). Our results show that, on average, Windows XP and Vista are more robust than Linux, although their performance is comparable under compute-intensive workloads. We conclude with a demonstration of how these results are used in the proposed reference model for OS robustness forecast and selection.
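The API-level robustness tests the abstract mentions typically invoke each function with boundary and invalid arguments and classify the outcome (pass, crash, hang). Below is a minimal sketch of such a Ballista-style harness, assuming a POSIX system where C-library functions can be reached via `ctypes`; the harness structure and the chosen test inputs are illustrative, not the paper's actual test suite.

```python
import ctypes
import multiprocessing as mp


def _call_api(fn_name, args):
    # Run the target C-library call in a child process so a crash
    # (e.g. SIGSEGV) kills only the child, not the test harness.
    fn = getattr(ctypes.CDLL(None), fn_name)  # None = the running process/libc
    fn(*args)


def classify(fn_name, args, timeout=5):
    """Invoke a libc function with the given arguments in a sandboxed
    child process and classify the result as PASS, CRASH, or HANG."""
    p = mp.Process(target=_call_api, args=(fn_name, args))
    p.start()
    p.join(timeout)
    if p.is_alive():          # still running after the deadline
        p.terminate()
        p.join()
        return "HANG"
    # A negative exitcode means the child died from a signal (a robustness
    # failure); 0 means the call returned without killing the process.
    return "PASS" if p.exitcode == 0 else "CRASH"
```

A valid call such as `classify("strlen", [b"hello"])` yields `PASS`, while an invalid one such as `classify("strlen", [None])` (a NULL pointer) typically yields `CRASH` on Linux, since glibc's `strlen` does not guard against NULL. Aggregating such verdicts per API, weighted by how often each API is exercised in a workload's operational profile, is the kind of calculation the proposed model builds on.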
