Productivity of Software Enhancement Projects: An Empirical Study

Background. Having a correct, even if approximate, knowledge of software development productivity is clearly important. In some environments, the belief has emerged that software enhancement projects are characterized by higher productivity than new software development. Aim. We want to understand whether this belief rests on solid grounds or is due to cognitive biases. Method. An empirical study was performed, analyzing a large dataset of real-life projects. Several statistical methods were used to evaluate the unitary cost (i.e., the cost per Function Point) of enhancement projects and new developments. Results. Our analyses show that, contrary to some popular beliefs, software enhancement costs more than new software development, at least for projects larger than 300 Function Points. Conclusions. Project managers and other stakeholders interested in the actual cost of software should reject ill-founded claims that the productivity of software enhancement is greater than that of new software development. More generally, objective evaluations based on the analysis of representative data should be preferred to evaluations affected by cognitive biases.
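
As a concrete illustration of the kind of comparison outlined in the Method and Results, the following Python sketch contrasts the unitary cost (effort per Function Point) of enhancement and new-development projects using a non-parametric test. The file name (projects.csv), the column names (project_type, effort_hours, functional_size_fp), and the choice of the Mann-Whitney U test are assumptions for illustration only; the study may have used different data fields and additional statistical methods.

# Minimal sketch, assuming a tabular export of the project dataset.
# All column names below are hypothetical, not the actual dataset schema.
import pandas as pd
from scipy.stats import mannwhitneyu

projects = pd.read_csv("projects.csv")  # hypothetical dataset export

# Unitary cost: effort per Function Point.
projects["unit_cost"] = projects["effort_hours"] / projects["functional_size_fp"]

# Restrict to larger projects (the abstract reports results for > 300 FP).
large = projects[projects["functional_size_fp"] > 300]

enhancement = large.loc[large["project_type"] == "enhancement", "unit_cost"]
new_dev = large.loc[large["project_type"] == "new_development", "unit_cost"]

# Unit-cost distributions are typically skewed, so a rank-based test
# (Mann-Whitney U) is used here instead of a t-test.
stat, p_value = mannwhitneyu(enhancement, new_dev, alternative="two-sided")

print(f"median unit cost (enhancement):     {enhancement.median():.2f}")
print(f"median unit cost (new development): {new_dev.median():.2f}")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")

A two-sided alternative is used because, a priori, the belief under scrutiny could be confirmed or contradicted; reporting medians alongside the test keeps the effect direction visible.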
