Understanding Software Performance Challenges: An Empirical Study on Stack Overflow

Performance is a quality attribute that describes how well software executes, and any performance degradation in turn affects other quality attributes, such as usability. Software developers therefore test continuously to ensure that code additions or changes do not break existing functionality or degrade quality, and they adopt strategies to detect, locate, and fix performance regressions when they occur. In this paper, we present an exploratory study of the challenges developers face when resolving performance regressions. The study is based on questions about performance regression posted on a technical forum (Stack Overflow): we collected 1828 questions discussing regressions in software execution time and manually analyzed all of them. The analysis yields a categorization of the challenges, and we also discuss the difficulty level of performance regression issues within the developer community. The resulting insights can help developers avoid common regression causes during software design and implementation.
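The abstract does not spell out how the question corpus was gathered, so the snippet below is only a minimal sketch of how such questions could be collected through the public Stack Exchange API; the search phrase, tag filter, and sort order are illustrative assumptions, not the authors' actual selection criteria.

```python
# Minimal sketch: pull Stack Overflow questions that mention "performance
# regression" via the public Stack Exchange API (v2.3). The query terms and
# tag filter below are assumptions for illustration only.
import requests

API = "https://api.stackexchange.com/2.3/search/advanced"

def fetch_regression_questions(page=1, pagesize=100):
    """Fetch one page of questions matching an assumed regression query."""
    params = {
        "q": "performance regression",   # free-text search phrase (assumed)
        "tagged": "performance",         # tag filter (assumed)
        "site": "stackoverflow",
        "order": "desc",
        "sort": "creation",
        "page": page,
        "pagesize": pagesize,
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_regression_questions()
    for q in data.get("items", []):
        status = "answered" if q.get("is_answered") else "unanswered"
        print(f"[{status}] {q['title']} ({q['link']})")
```

In practice, a study like this would paginate until the API reports no more results and then filter and label the retrieved questions manually, as the paper describes.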
