SunCat: helping developers understand and predict performance problems in smartphone applications

The number of smartphones shipped in 2014 will be four times the number of PCs shipped. Compared to PCs, smartphones have limited computing resources, and smartphone applications are more prone to performance problems. Traditionally, developers detect performance problems by profiling applications on relatively large inputs. Unfortunately, for smartphone applications, the developer cannot easily control the input, because smartphone applications interact heavily with their environment. Given a run on a small input, how can a developer detect performance problems that would occur on a run with a large input? We present SUNCAT, a novel technique that helps developers understand and predict performance problems in smartphone applications. The developer runs the application on a common, typically small, input, and SUNCAT presents a prioritized list of repetition patterns that summarize the current run, plus additional information that helps the developer understand how these patterns may grow in future runs with larger inputs. We implemented SUNCAT for Windows Phone and used it to study the performance characteristics of 29 usage scenarios in 5 popular applications. We found one performance problem that was confirmed and fixed, four problems that were confirmed, one confirmed problem that duplicated an older report, and three more potential performance problems that the developers agreed could be improved.
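
To make the core idea concrete, below is a minimal sketch (in Python, not SUNCAT's actual Windows Phone implementation) of detecting and prioritizing back-to-back repetition patterns in a method-call trace. The trace format, the method names, and the coverage-based scoring are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming a trace is a list of method-name strings;
# this illustrates the idea, not SUNCAT's actual algorithm or trace format.

def repetition_patterns(trace, max_len=8):
    """Find subsequences that repeat back-to-back in `trace`, recording
    the maximum repetition count observed for each pattern."""
    patterns = {}
    n = len(trace)
    for length in range(1, max_len + 1):
        i = 0
        while i + 2 * length <= n:
            reps = 1
            # Count how many times trace[i:i+length] repeats consecutively.
            while (i + (reps + 1) * length <= n and
                   trace[i + reps * length : i + (reps + 1) * length]
                   == trace[i : i + length]):
                reps += 1
            if reps > 1:
                key = tuple(trace[i : i + length])
                patterns[key] = max(patterns.get(key, 0), reps)
                i += reps * length  # skip past the repeated region
            else:
                i += 1
    # Prioritize patterns by how many trace events they cover.
    return sorted(patterns.items(),
                  key=lambda kv: len(kv[0]) * kv[1], reverse=True)

# Hypothetical run over a small input (3 contacts): the dominant pattern
# repeats once per contact, hinting at linear growth on larger inputs.
trace = ["LoadContact", "FormatName", "Render"] * 3 + ["Commit"]
for pattern, reps in repetition_patterns(trace):
    print(f"{reps} x {' -> '.join(pattern)}")
```

On this small run the dominant pattern repeats three times, once per contact; noticing that the repetition count tracks the input size is the kind of extrapolation to larger inputs that SUNCAT's prioritized pattern list is meant to support.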
