Detecting Performance Bottlenecks Guided by Resource Usage

Detecting performance bottlenecks is critical to fixing software performance issues. Many performance bottlenecks are related to resource usage, which can be affected by configurations. To detect configuration-related performance bottlenecks, existing works either use learning methods to model the relationships between configurations and performance, or use profiling methods to monitor execution time. Learning methods are time-consuming when analyzing software with large numbers of configuration options, while profiling methods can incur excessive overhead. In this paper, we conduct empirical studies on configurations, performance, and resources. We find that 1) 49% of performance issues can be improved or fixed by configurations; 2) 71% of configuration options affect performance by tuning resource usage in simple ways; and 3) four types of resources account for the main causes of performance issues. Inspired by these findings, we design PBHunter, a resource-guided instrumentation tool that detects configuration-related performance bottlenecks. PBHunter ranks configuration options by resource usage and selects the ones that heavily affect resource usage. Guided by the selected options, PBHunter instruments resource-related code snippets. Our evaluation shows that PBHunter effectively exposes the culprits of performance issues (36/50) with minor overhead (5.1% on average).
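
To make the selection phase concrete, below is a minimal sketch of how ranking configuration options by their effect on resource usage might look. This is not PBHunter's actual implementation: the workload driver `run_workload`, the option dictionary, the memory-only metric, and the 0.2 threshold are all illustrative assumptions.

```python
import resource  # POSIX-only; samples process resource usage

# Hypothetical sketch of a resource-guided selection phase: run the
# workload at the extremes of each configuration option, measure the
# resource-usage swing, and keep the options whose swing exceeds a
# threshold, highest first.

def measure(option, value, run_workload):
    """Run the workload with `option=value`; return peak RSS delta (KB)."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    run_workload({option: value})  # user-supplied workload driver (assumed)
    after = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return after - before

def rank_options(options, run_workload, threshold=0.2):
    """options: {name: (low_value, high_value)} for each config option."""
    scores = {}
    for opt, (low, high) in options.items():
        lo_cost = measure(opt, low, run_workload)
        hi_cost = measure(opt, high, run_workload)
        # Relative swing in resource usage across the option's range.
        scores[opt] = abs(hi_cost - lo_cost) / max(lo_cost, 1)
    # Keep only options that heavily affect resource usage.
    return sorted((o for o, s in scores.items() if s >= threshold),
                  key=scores.get, reverse=True)
```

In the full tool, the same idea would extend beyond memory to the four resource types identified in the study, and the selected options would then guide where instrumentation probes are placed in resource-related code snippets.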
