Broadcast vs. Unicast Review Technology: Does It Matter?

Code review is the process of having other team members examine changes to a software system in order to evaluate their technical content and quality. Over the years, multiple tools have been proposed to help software developers conduct and manage code reviews. Some software organizations have been migrating from broadcast review technology (e.g., mailing lists) to more advanced unicast review approaches such as JIRA, but it is unclear whether this unicast review technology leads to better code reviews. This paper empirically studies the review data of five Apache projects that switched from broadcast-based to unicast-based code review, to understand the impact of review technology on review effectiveness and quality. Results suggest that broadcast-based reviews are completed twice as fast as reviews conducted with unicast-based review technology; however, the quality of unicast-based reviews appears to be better than that of broadcast-based ones. Our findings suggest that the medium (i.e., broadcast or unicast) used for code reviews can relate to the effectiveness and quality of review activities.
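To make the comparison described above concrete, the following is a minimal sketch (not the paper's actual analysis) of how review-time distributions from a broadcast period and a unicast period could be compared using a non-parametric Mann-Whitney U test together with Cliff's delta as an effect size. The review durations below are hypothetical placeholders, and the helper function cliffs_delta is our own illustration.

# Minimal sketch: comparing review-time distributions between a
# broadcast (mailing-list) period and a unicast (JIRA) period.
# All durations are hypothetical; the paper's dataset may differ.
from scipy.stats import mannwhitneyu

# Hypothetical review durations (in hours) for each medium.
broadcast_hours = [4.0, 6.5, 8.0, 12.0, 20.0, 30.0]
unicast_hours = [10.0, 15.0, 18.0, 25.0, 40.0, 55.0]

def cliffs_delta(xs, ys):
    """Cliff's delta effect size: fraction of (x, y) pairs with x > y
    minus fraction with x < y; ranges from -1 to 1."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Mann-Whitney U: non-parametric test for a difference in distributions.
stat, p_value = mannwhitneyu(broadcast_hours, unicast_hours,
                             alternative="two-sided")
delta = cliffs_delta(broadcast_hours, unicast_hours)
print(f"U={stat:.1f}, p={p_value:.3f}, Cliff's delta={delta:.2f}")

A negative delta here would indicate that broadcast review times tend to be shorter than unicast review times, which is the direction of the speed result reported in the abstract.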
