Drivers for Customer Perceived Software Quality

Predicting software quality as perceived by a customer may allow an organization to adjust deployment to meet the quality expectations of its customers, to allocate the appropriate amount of maintenance resources, and to direct quality improvement efforts so as to maximize return on investment. However, customer perceived quality may be affected not only by the software content and the development process, but also by a number of other factors, including deployment issues, amount of usage, software platforms, and hardware configurations. We predict customer perceived quality, as measured by various service interactions including software defect reports, requests for assistance, and field technician dispatches, using the aforementioned and other factors for a large software system. We employ a non-intrusive data gathering technique that uses existing data captured in automated project monitoring and tracking systems as well as customer support and tracking systems. We find that the effects of deployment schedule, hardware platform, and software configuration can increase the probability of observing a failure by more than 20 times. Furthermore, we find that these factors affect all quality measures in a similar fashion. Our theoretical model could be applied at other organizations, and we suggest methods to independently validate and replicate our results.
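The kind of model the abstract describes, relating deployment factors to the probability of a customer observing a failure, can be sketched as a logistic regression. The sketch below is illustrative only: the factor names (usage, hardware platform, early deployment), the synthetic data, and the coefficient values are assumptions for demonstration, not data or estimates from the study.

```python
import math
import random

random.seed(0)

def simulate(n=2000):
    """Synthetic deployments: (usage, new_hw, early); label 1 = failure observed.
    True effects are chosen arbitrarily for illustration."""
    rows = []
    for _ in range(n):
        usage = random.random()        # normalized amount of usage
        new_hw = random.randint(0, 1)  # 1 = newer hardware platform
        early = random.randint(0, 1)   # 1 = deployed early in the release cycle
        # assumed true log-odds: heavier usage and early deployment raise failure odds
        z = -3.0 + 2.0 * usage + 1.0 * new_hw + 1.5 * early
        p = 1.0 / (1.0 + math.exp(-z))
        rows.append(((usage, new_hw, early), 1 if random.random() < p else 0))
    return rows

def fit_logistic(rows, lr=0.1, epochs=300):
    """Fit logistic regression by batch gradient descent on the log-likelihood."""
    w, b = [0.0, 0.0, 0.0], 0.0
    n = len(rows)
    for _ in range(epochs):
        gw, gb = [0.0, 0.0, 0.0], 0.0
        for x, y in rows:
            p = 1.0 / (1.0 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
            err = p - y
            for i in range(3):
                gw[i] += err * x[i]
            gb += err
        for i in range(3):
            w[i] -= lr * gw[i] / n
        b -= lr * gb / n
    return w, b

rows = simulate()
w, b = fit_logistic(rows)
# For a binary factor, exp(coefficient) is the multiplicative change in
# failure odds when that factor is present.
odds_early = math.exp(w[2])
print("odds ratio, early deployment:", round(odds_early, 2))
```

Expressing factor effects as odds ratios is what makes statements such as "increase the probability of observing a failure by more than 20 times" directly readable from the fitted coefficients.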
