A qualitative study on performance bugs

Software performance is one of the important qualities that make software stand out in a competitive market. However, in earlier work we found that performance bugs take more time to fix, need to be fixed by more experienced developers, and require changes to more code than non-performance bugs. To improve the resolution of performance bugs, a better understanding is needed of current practice and its shortcomings in reporting, reproducing, tracking, and fixing performance bugs. This paper qualitatively studies a random sample of 400 performance and non-performance bug reports of Mozilla Firefox and Google Chrome across four dimensions (Impact, Context, Fix, and Fix validation). We found that developers and users face problems in reproducing performance bugs and spend more time discussing performance bugs than other kinds of bugs. Sometimes performance regressions are tolerated as a tradeoff to improve something else.

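As an illustration of the study design sketched in the abstract, the snippet below shows one way the sampling step could look: draw a random sample of bug reports from tracker exports and pair each sampled report with blank entries for the four coding dimensions. This is a minimal sketch only; the CSV file names, the 'id' column, and the per-project sample sizes are assumptions for illustration, not details taken from the paper or its tooling.

import csv
import random

# The four qualitative coding dimensions named in the abstract.
DIMENSIONS = ("Impact", "Context", "Fix", "Fix validation")

def sample_bug_reports(csv_path, sample_size, seed=0):
    """Return a random sample of bug report rows from a CSV export.

    Assumes the export has at least an 'id' column; the file layout and
    the sample size per project are illustrative assumptions.
    """
    with open(csv_path, newline="") as handle:
        reports = list(csv.DictReader(handle))
    random.seed(seed)
    return random.sample(reports, min(sample_size, len(reports)))

def empty_coding_sheet(reports):
    """Pair each sampled report with a blank entry per coding dimension."""
    return [
        {"id": report["id"], **{dim: None for dim in DIMENSIONS}}
        for report in reports
    ]

if __name__ == "__main__":
    # Hypothetical export file names; the study itself used the Mozilla
    # Firefox and Google Chrome bug trackers.
    sample = (sample_bug_reports("firefox_bugs.csv", 200)
              + sample_bug_reports("chrome_bugs.csv", 200))
    sheet = empty_coding_sheet(sample)
    print(f"{len(sheet)} reports ready for manual coding")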