Methods and Tools for Focusing and Prioritizing the Testing Effort

Software testing is an essential activity in any software development process, but it is also extremely expensive. Despite its importance, recent studies have shown that developers rarely test their applications and that most programming sessions end without any test execution. New methods and tools that better allocate developers' effort are therefore needed to increase system reliability and reduce testing costs. In this work we focus on three activities that can optimize testing: bug prediction, test case prioritization, and energy leak detection. Although the effort devoted by the research community over the last decades has led to interesting results, we highlight several aspects that can still be improved and propose empirical investigations and novel approaches. Finally, we outline a set of open issues that the research community should address in the future.
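
To make one of these activities concrete, the sketch below illustrates the classic greedy "additional" strategy for coverage-based test case prioritization: a standard baseline from the regression-testing literature, not the specific approach proposed in this work. All test names and coverage data in the example are hypothetical.

```python
# A minimal sketch of the greedy "additional" strategy for coverage-based
# test case prioritization: repeatedly pick the test that covers the most
# statements not yet covered by previously selected tests.
# All test names and coverage data here are hypothetical.

def prioritize(coverage: dict[str, set[int]]) -> list[str]:
    """Order tests by how much *new* statement coverage each one adds."""
    remaining = dict(coverage)   # tests still to be scheduled
    covered: set[int] = set()    # statements covered so far
    order: list[str] = []
    while remaining:
        # Test adding the largest number of not-yet-covered statements.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            if covered:
                # No test adds new coverage: reset and re-rank, as in the
                # classic "additional" algorithm.
                covered = set()
                continue
            # Tests that cover nothing at all: append them last.
            order.extend(remaining)
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical statement coverage per test case.
coverage = {
    "test_login":  {1, 2, 3, 4},
    "test_logout": {3, 4},
    "test_signup": {1, 5, 6},
    "test_reset":  {2, 7},
}
print(prioritize(coverage))
# ['test_login', 'test_signup', 'test_reset', 'test_logout']
```

The reset step when no remaining test adds coverage is part of the classic algorithm: once every statement reachable by the suite has been covered, the remaining tests are ranked again from scratch so their relative ordering is still coverage-driven.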
