The impact of human factors on the participation decision of reviewers in modern code review
Shade Ruangwan | Patanamon Thongtanunam | Akinori Ihara | Kenichi Matsumoto