An Empirical Study of Design Degradation: How Software Projects Get Worse over Time

Context: Software decay is a key concern for large, long-lived software projects: systems degrade over time as design and implementation compromises and exceptions accumulate. Goal: Quantify design decay and understand how software projects deal with it. Method: We conducted an empirical study of the presence and evolution of code smells, used as an indicator of design degradation, in 220 open-source projects. Results: Design issues are frequently ignored in favor of fixing defects. Design issues also have a higher chance of being fixed in the early stages of a project; efforts to correct them stall as projects mature and the code base grows, leading to a build-up of problems. The best approach to maintaining a project's quality is therefore to spend time reducing both software defects (bugs) and design issues (through refactoring). Conclusions: Across this large set of open-source projects, our results suggest that while core contributors tend to fix design issues more often than non-core contributors, the difference disappears once the relative number of commits is accounted for. We also show that design issues tend to accumulate over time.
