"Was My Contribution Fairly Reviewed?" A Framework to Study the Perception of Fairness in Modern Code Reviews

Modern code reviews improve the quality of software products. Although they rely heavily on human interaction, little is known about whether they are performed fairly. Fairness plays a role in any process in which decisions that affect others are made; when a system is perceived to be unfair, the productivity and motivation of its participants suffer. In this paper, we use fairness theory to create a framework that describes how fairness affects modern code reviews. To demonstrate its applicability, and the importance of fairness in code reviews, we conducted an empirical study that asked developers of a large, industry-backed open source ecosystem (OpenStack) about their perceptions of fairness in their code review process. Our study shows that, in general, the code review process in OpenStack is perceived as fair; however, a significant portion of respondents perceive it as unfair. We also show that the variability in how reviewers prioritize code reviews signals a lack of consistency and the existence of bias, potentially increasing the perception of unfairness. The contributions of this paper are: (1) we propose a framework, based on fairness theory, for studying and managing social behaviour in modern code reviews; (2) we provide support for the framework through the results of a case study of a large, industry-backed open source project; (3) we present evidence that fairness is an issue in the code review process of a large open source ecosystem; and (4) we present a set of guidelines for practitioners to address unfairness in modern code reviews.
