Reliability Assessment of Mass-Market Software: Insights from Windows Vista®

Assessing the reliability of mass-market software (MMS), such as the Windows® operating system, presents many challenges. In this paper, we share insights gained from the Windows Vista® and Windows Vista® SP1 operating systems. First, we find that the automated reliability monitoring approach, which periodically reports reliability status, provides higher-quality data and requires less effort than other approaches available today. We describe one instance in detail, the Windows reliability analysis component, and illustrate its advantages using data from Windows Vista. Second, we show the need to account for usage scenarios during reliability assessments. For pre-release versions of Windows Vista and Vista SP1, usage scenarios differ by 2-4X between Microsoft-internal and external samples; the corresponding reliability assessments differ by 2-3X. Our results help motivate and guide further research in reliability assessment.
