A Survey of Quality Assurance Practices in Biomedical Open Source Software Projects

Background
Open source (OS) software is continuously gaining recognition and use in the biomedical domain, for example, in health informatics and bioinformatics.

Objectives
Given the mission-critical nature of applications in this domain and their potential impact on patient safety, it is important to understand to what degree and how effectively biomedical OS developers perform standard quality assurance (QA) activities such as peer reviews and testing. This would allow users of biomedical OS software to better understand the quality risks, if any, and developers to identify process improvement opportunities to produce higher-quality software.

Methods
A survey of developers working on biomedical OS projects was conducted to examine the QA activities that are performed. We took a descriptive approach to summarize the implementation of QA activities and then examined some of the factors that may be related to the implementation of such practices.

Results
Our descriptive results show that 63% (95% CI, 54-72) of projects did not include peer reviews in their development process, while 82% (95% CI, 75-89) did include testing. Approximately 74% (95% CI, 67-81) of developers did not have a background in computing, 80% (95% CI, 74-87) were paid for their contributions to the project, and 52% (95% CI, 43-60) had PhDs. A multivariate logistic regression model to predict the implementation of peer reviews was not significant (likelihood ratio test = 16.86, 9 df, P = .051), and neither was a model to predict the implementation of testing (likelihood ratio test = 3.34, 9 df, P = .95).

Conclusions
Less attention is paid to peer review than to testing. However, the former is a complementary, and necessary, QA practice rather than an alternative. Therefore, one can argue that there are quality risks, at least at this point in time, in transitioning biomedical OS software into any critical settings that may have operational, financial, or safety implications. Developers of biomedical OS applications should invest more effort in implementing systematic peer review practices throughout the development and maintenance processes.
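The 95% confidence intervals reported in the Results can be reproduced with the standard Wald interval for a proportion. The sketch below is illustrative only: the abstract does not state the sample size, so n = 120 is a hypothetical value chosen to show the arithmetic, not the study's actual n.

```python
import math

def wald_ci(p, n, z=1.96):
    """Wald 95% confidence interval for a sample proportion p observed over n units.

    Returns (lower, upper) bounds on the proportion scale.
    """
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Hypothetical example: 63% of a sample of n = 120 projects lacked peer review.
lo, hi = wald_ci(0.63, 120)
print(f"95% CI: {lo * 100:.0f}-{hi * 100:.0f}")
```

With these assumed inputs, the interval comes out close to the 54-72 range quoted in the Results; for small samples or proportions near 0 or 1, a Wilson or exact interval would be preferable.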
