Release Readiness Classification: An Explorative Case Study

Context: To survive in a highly competitive software market, product managers strive for frequent, incremental releases in ever shorter cycles. Release decisions are characterized by high complexity and have a high impact on project success. Under such conditions, using the experience from past releases could help product managers make more informed decisions. Goal and research objectives: To make the decision of when to release more operational, we formulate release readiness (RR) as a binary classification problem. The goal of the research presented in this paper is twofold: (i) to propose a machine learning approach called RC* (Release readiness Classification applying predictive techniques) with two strategies for defining the training set, called incremental and sliding window, and (ii) to empirically evaluate the applicability of RC* under varying project characteristics. Methodology: In an explorative case study, we applied the RC* method to four OSS projects under the Apache Software Foundation, retrospectively covering a period of 82 months, 90 releases, and 3722 issues. We used Random Forest as the classification technique, along with eight independent variables, to classify release readiness in individual weeks. Predictive performance was measured in terms of precision, recall, F-measure, and accuracy. Results: The incremental and sliding window approaches achieve overall accuracies of 76% and 79%, respectively, in classifying RR for the four analyzed projects. The incremental approach outperforms the sliding window approach in terms of stability of predictive performance. Predictive performance of both approaches is significantly influenced by three project characteristics: (i) release duration, (ii) number of issues in a release, and (iii) size of the initial training dataset. Conclusion: As an initial observation, the incremental approach achieves higher accuracy when releases have long durations and few issues and when classifiers are trained on a large training set. In contrast, the sliding window approach achieves higher accuracy when releases have short durations and classifiers are trained on a small training set.
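The abstract describes week-by-week RR classification with Random Forest and two ways of defining the training set. The sketch below illustrates only those two training-set strategies; it is not the paper's implementation. The feature values and labels are synthetic, the initial training set of 20 weeks and the window size of 30 weeks are placeholder assumptions, and the helper classify_weeks is hypothetical. The paper's actual eight independent variables are not specified in the abstract.

```python
# Minimal sketch of incremental vs. sliding-window training-set definitions
# for weekly release-readiness classification, using scikit-learn.
# All data and parameter choices here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(0)

# Hypothetical weekly observations: one row per week, eight independent
# variables, and a binary RR label (1 = release ready, 0 = not ready).
n_weeks, n_features = 120, 8
X = rng.normal(size=(n_weeks, n_features))
y = rng.integers(0, 2, size=n_weeks)

def classify_weeks(X, y, initial=20, window=None):
    """Classify each week using only data from earlier weeks.

    window=None -> incremental: train on all weeks seen so far.
    window=k    -> sliding window: train on the most recent k weeks only.
    """
    y_true, y_pred = [], []
    for t in range(initial, len(X)):
        start = 0 if window is None else max(0, t - window)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[start:t], y[start:t])          # past weeks only
        y_pred.append(clf.predict(X[t:t + 1])[0])  # classify week t
        y_true.append(y[t])
    return np.array(y_true), np.array(y_pred)

for name, window in [("incremental", None), ("sliding window", 30)]:
    yt, yp = classify_weeks(X, y, initial=20, window=window)
    p, r, f, _ = precision_recall_fscore_support(
        yt, yp, average="binary", zero_division=0)
    print(f"{name}: accuracy={accuracy_score(yt, yp):.2f} "
          f"precision={p:.2f} recall={r:.2f} F-measure={f:.2f}")
```

The only difference between the two strategies is the start index of the training slice: the incremental variant keeps every past week, while the sliding-window variant discards weeks older than the window, which is what makes it more responsive on short releases and less stable overall.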
