Evaluating Open Source Software through Prototyping

The increasing number of high-quality open source software (OSS) components lets industrial organizations seriously consider integrating them into their software solutions for critical business cases. However, careful consideration is needed to choose the "right" OSS component for a specific business case. OSS components need to fulfill specific functional and non-functional requirements, must fit into a planned architecture, and must comply with context factors in a specific environment. This chapter introduces a prototyping approach to evaluate OSS components. The prototyping approach provides decision makers with context-specific evaluation results and a prototype for demonstration purposes. The approach can be used by industrial organizations to decide on the feasibility of OSS components in their concrete business cases. We present one of the industrial case studies we conducted in a practical course at the University of Kaiserslautern to demonstrate the application of our approach in practice. This case study shows that even inexperienced developers such as students can produce valuable evaluation results for an industrial customer that wants to use open source components. Copyright © 2007, IGI Global. Distributing in print or electronic forms without written permission of IGI Global is prohibited.

EVALUATING OPEN SOURCE SOFTWARE THROUGH PROTOTYPING

There is an increasing number of open source software (OSS) projects that release software components providing almost complete sets of the functionality required in particular domains. These components are often also of such high quality that industry is, in more and more cases, seriously considering using them as part of its commercial products. In such scenarios, OSS components must compete with similar components on the market, including both other OSS projects and commercial solutions.
The model behind OSS is generally more attractive to companies than commercial business models, especially for small and medium-sized companies, due to the free distribution of OSS, the full access to sources and documentation, and the quick responses and support from a community consisting of developers and other users. The implementation of this OSS model and the quality of the software, however, vary significantly from one OSS project to another. Hence, it is crucial for an organization to systematically investigate how the OSS model is implemented by the particular OSS projects whose software it considers reusing. The reusability of any type of software (including, in particular, OSS) depends on the quality of the software itself as well as that of its documentation. Code quality is affected, for example, by code comments, structuring, and coding style. The quality of the available documentation is defined by its readability, comprehensibility, and technical quality, and by its suitability for the intended reuse scenarios involving the OSS. Besides documentation, the community supporting a particular OSS project is a crucial element, too. The response time and quality of community feedback depend on the overall size of the user group and the skill level of its members. All these aspects should be explicitly evaluated before an OSS component is reused in real projects, let alone in critical ones. Note that all of these aspects not only may vary significantly from one OSS project to another, but also depend heavily on the concrete context and reuse scenarios of the OSS. This chapter reports on a way to evaluate OSS in a holistic way.
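The evaluation aspects listed above (code quality, documentation, community support) can be made explicit in a simple weighted scoring sheet. The following sketch is purely illustrative: the criteria names, weights, and ratings are hypothetical and would, in a real evaluation, be derived from the concrete context and reuse scenarios.

```python
# Hypothetical weighted scoring sheet for comparing OSS candidates.
# Criteria and weights are illustrative only, not prescribed by the chapter.
CRITERIA_WEIGHTS = {
    "code_quality": 0.3,       # comments, structuring, coding style
    "documentation": 0.3,      # readability, comprehensibility, fit for reuse
    "community_support": 0.4,  # response time, size and skill of user group
}

def score_candidate(ratings: dict) -> float:
    """Aggregate per-criterion ratings (0-10) into a weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Example: two fictitious OSS components rated by an evaluation team.
candidates = {
    "component_a": {"code_quality": 8, "documentation": 5, "community_support": 9},
    "component_b": {"code_quality": 6, "documentation": 8, "community_support": 6},
}

# Rank candidates by their weighted score, best first.
ranked = sorted(candidates,
                key=lambda name: score_candidate(candidates[name]),
                reverse=True)
print(ranked[0])  # → component_a
```

Such a sheet does not replace the prototype project described below in the chapter; it only documents the screening step so that the weighting of context factors is visible to decision makers.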
That is, OSS components are first evaluated like any other potential COTS (commercial off-the-shelf) component; second, they are used in a prototype project similar to, but smaller than, the intended product development. This includes an evaluation of the component in the context of the projected architecture to avoid architectural mismatches, as well as an evaluation of the support provided by the related community. To minimize the costs of such a pre-project evaluation, an evaluation team consisting of a group of graduate computer science students may be deployed. A prototyping approach can also be used to gather more detailed information on the adequacy of COTS components for a specific context, but it pays off especially for the selection of OSS components. The quality of the source code and the development documentation can be evaluated, for instance, which increases trust in the quality of the component. Furthermore, it is even possible to evaluate whether the OSS component can easily be adapted in-house to better fulfill the specific requirements. The chapter presents experience from several OSS evaluation projects performed during the last few years in the context of a one-semester practical course on software engineering at the University of Kaiserslautern. The systematic and sound evaluation was supported by researchers of the Fraunhofer Institute for Experimental Software Engineering (IESE). Each evaluation employed a temporary team of students to conduct a feasibility study, that is, realizing a prototypical solution based on the OSS to be evaluated, as specified by industrial stakeholders.
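The two-stage decision just described can be sketched as a simple feasibility rule: a component is accepted only if it passes the COTS-style screening and the prototype project surfaces neither an architectural mismatch nor other blocking findings. All names and fields below are hypothetical illustrations, not part of the chapter's method.

```python
# Minimal sketch of a two-stage feasibility decision, assuming hypothetical
# fields for the screening result and the prototype findings.
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    meets_must_have_requirements: bool   # stage 1: COTS-style screening
    fits_projected_architecture: bool    # stage 2: checked via the prototype
    blocking_findings: list = field(default_factory=list)  # e.g. licensing issues

def feasible(e: Evaluation) -> bool:
    """Accept only if screening passed and the prototype raised no blockers."""
    return (e.meets_must_have_requirements
            and e.fits_projected_architecture
            and not e.blocking_findings)

result = Evaluation(True, True, [])
print(feasible(result))  # → True
```

The point of the sketch is that the prototype contributes evidence (architecture fit, blocking findings) that a paper-based COTS screening alone cannot provide.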
The industrial stakeholder always provided a set of functional and quality requirements and a projected architecture for the envisioned product.
