Systematic execution of Android test suites in adverse conditions

Event-driven applications, such as mobile apps, are difficult to test thoroughly. Application programmers often put significant effort into writing end-to-end test suites. Even though such tests often achieve high coverage of the source code, we find that they tend to focus on the expected behavior rather than on occurrences of unusual events. Automated testing tools, on the other hand, may be capable of exploring the state space more systematically, but mostly without knowledge of the intended behavior of the individual applications. As a consequence, many programming errors remain unnoticed until they are encountered by the users. We propose a new methodology for testing that leverages existing test suites such that each test case is systematically exposed to adverse conditions in which certain unexpected events may interfere with the execution. In this way, we explore the interesting execution paths and take advantage of the assertions in the manually written test suite, while ensuring that the injected events do not affect the expected outcome. The main challenge that we address is how to accomplish this systematically and efficiently. We have evaluated the approach by implementing a tool, Thor, that works on Android. The results on four real-world apps with existing test suites demonstrate that apps are often fragile with respect to certain unexpected events and that our methodology effectively increases testing quality: of 507 individual tests, 429 fail when exposed to adverse conditions, revealing 66 distinct problems that are not detected by ordinary execution of the tests.
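
To make the idea concrete, the following is a minimal sketch, not Thor's actual implementation, of what injecting an adverse condition into an existing end-to-end test can look like in Java with AndroidX Espresso and ActivityScenario: a pause/resume lifecycle cycle is inserted between the original interactions, and the original assertion is reused as the oracle, since such a neutral event should not change the expected outcome. The activity and view identifiers (NoteEditorActivity, R.id.note_text) are hypothetical placeholders.

    // Sketch only: an ordinary UI test with one injected adverse condition.
    import androidx.lifecycle.Lifecycle;
    import androidx.test.core.app.ActivityScenario;
    import androidx.test.espresso.Espresso;
    import androidx.test.espresso.action.ViewActions;
    import androidx.test.espresso.assertion.ViewAssertions;
    import androidx.test.espresso.matcher.ViewMatchers;
    import org.junit.Test;

    public class AdverseConditionTest {

        @Test
        public void saveNoteSurvivesPauseResume() {
            try (ActivityScenario<NoteEditorActivity> scenario =
                     ActivityScenario.launch(NoteEditorActivity.class)) {

                // Original test step: type some text into the editor.
                Espresso.onView(ViewMatchers.withId(R.id.note_text))
                        .perform(ViewActions.typeText("buy milk"));

                // Injected adverse condition: pause and resume the activity,
                // as happens when another app briefly takes the foreground.
                scenario.moveToState(Lifecycle.State.STARTED);  // triggers onPause()
                scenario.moveToState(Lifecycle.State.RESUMED);  // triggers onResume()

                // Original assertion, reused as the oracle: the injected
                // event must not affect the expected outcome.
                Espresso.onView(ViewMatchers.withId(R.id.note_text))
                        .check(ViewAssertions.matches(
                            ViewMatchers.withText("buy milk")));
            }
        }
    }

A systematic approach would generate such variants automatically from each existing test, varying where the adverse condition is injected and which kind of unexpected event is used.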
