A controlled experiment on software clones

Most software systems contain sections of duplicated source code, known as clones, which are widely believed to make maintenance more difficult. Recent studies have tested this assumption through retrospective analyses of software archives. While such analyses give important insights, they rely only on snapshots of the code and miss the human interaction that occurs between them. We conducted a controlled experiment to investigate how clones affect programmers' performance on common bug-fixing tasks. While our results do not exhibit a decisive difference in the time needed to correct cloned bugs, we observed many cases in which cloned bugs were not corrected completely.