A Survey of Software Code Review Practices in Brazil

Context: Software code review aims to find code anomalies early and to perform code improvements while they are still inexpensive. However, the issues and challenges faced by developers who do not apply code review practices regularly remain unclear. Goal: Investigate the difficulties developers face in applying code review practices, without limiting the target audience to developers who already use these practices regularly. Method: We conducted a web-based survey with 350 Brazilian practitioners engaged in the software development industry. Results: Code review practices are widespread among Brazilian practitioners, who recognize their importance. However, there is no established routine for applying these practices. In addition, practitioners report difficulties in fitting static analysis tools into the software development process. One possible reason, recognized by practitioners, is that most of these tools use a single metric threshold, which might not be adequate for evaluating all system classes. Conclusion: Better guidelines for fitting code review practices into the software development process could help make them widely used. Additionally, future studies should investigate whether multiple metric thresholds that take source code context into account reduce static analysis tool false alarms. Finally, these tools should support use in distinct phases of the software development process.
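The single-threshold concern raised by practitioners can be illustrated with a minimal sketch. The metric name, contexts, and numeric limits below are hypothetical, chosen only for exposition; they do not come from the survey or any specific tool:

```python
# Illustrative sketch: why one global metric threshold can misfire.
# All names and numbers are hypothetical, for exposition only.

SINGLE_THRESHOLD = 20  # one complexity limit applied to every class

# Context-aware limits: e.g., GUI classes legitimately tend to be larger,
# so a stricter limit suited to model classes would flag them spuriously.
CONTEXT_THRESHOLDS = {"gui": 40, "model": 15, "util": 20}

def flag_single(complexity: int) -> bool:
    """Warn about a class using one global threshold."""
    return complexity > SINGLE_THRESHOLD

def flag_contextual(complexity: int, context: str) -> bool:
    """Warn about a class using a threshold tuned to its context."""
    return complexity > CONTEXT_THRESHOLDS.get(context, SINGLE_THRESHOLD)

# A GUI class with complexity 30 triggers a warning (arguably a false
# alarm) under the single threshold, but not under the contextual one.
print(flag_single(30))             # True  (warning raised)
print(flag_contextual(30, "gui"))  # False (no warning)
```

Under this sketch, replacing the global limit with per-context limits is exactly the kind of change whose effect on false-alarm rates the Conclusion proposes studying.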
