Pragmatic prioritization of software quality assurance efforts

A large body of recent work leverages historical data to help practitioners prioritize their software quality assurance efforts. However, adoption of this work in practice remains low. We identify a set of challenges that must be addressed to make quality assurance prioritization research more pragmatic, and we outline four guidelines that address them: 1) Focused Granularity (i.e., prioritize small units, such as functions or changes), 2) Timely Feedback (i.e., deliver results while they can still be acted on), 3) Estimate Effort (i.e., estimate the time needed to complete the prioritized tasks), and 4) Evaluate Generality (i.e., evaluate findings across multiple projects and domains). We then present two approaches, one at the code level and one at the change level, that demonstrate how prior approaches can be made more pragmatic.
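
To make the effort-aware ranking idea concrete, here is a minimal sketch (our illustration, not the paper's method) that orders small prioritization units by predicted risk per unit of estimated effort. The `Unit` class, the risk scores, and the use of lines of code as an effort proxy are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Unit:
    """A small prioritization unit, e.g., a function or a change (Focused Granularity)."""
    name: str
    risk: float    # predicted fault-proneness in [0, 1], e.g., from a defect model
    effort: float  # estimated review effort; here approximated by lines of code


def prioritize(units: list[Unit]) -> list[Unit]:
    """Rank units by risk per unit of effort, so limited QA time goes
    where the expected payoff per reviewed line is highest (Estimate Effort)."""
    return sorted(units, key=lambda u: u.risk / u.effort, reverse=True)


if __name__ == "__main__":
    # Hypothetical candidates; names and numbers are made up for illustration.
    candidates = [
        Unit("parser.tokenize", risk=0.8, effort=400),
        Unit("cache.evict",     risk=0.6, effort=60),
        Unit("ui.render",       risk=0.3, effort=900),
    ]
    for u in prioritize(candidates):
        print(f"{u.name}: {u.risk / u.effort:.4f} risk per line")
```

Note that the small but risky `cache.evict` outranks the larger `parser.tokenize` under this ratio. A plain risk-to-effort ratio is only one simple instantiation; effort-aware cost-effectiveness measures could be substituted without changing the overall structure.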
