Software Process Improvement in Small Organizations Using Gradual Evaluation Schema

This paper relates a technology transfer experience which aims at supporting the introduction of software process improvement in small businesses, small organizations and/or small projects. The experience grew out of a European interregional collaboration between two university research teams (France and Belgium) and a public technology center (Luxembourg). One of the contributions of this experience is the design of a Software Process Improvement approach particularly adapted to small units on the one hand, and to the regional context on the other hand. The proposed approach is gradual. It is based on three nested evaluation models ranging from an extremely simplified model (the micro-evaluation model) to a complete standard model supporting SPICE. The intermediate model, called the mini-evaluation model, can be viewed as a tailoring of SPICE and can be used by itself as a definitive model by small businesses and small organizations.

1. Context and Motivation

The project is mainly addressed to the Small and Medium Enterprises (SMEs) and small public organizations of the Walloon region, i.e., the French-speaking part of Belgium, which is one of the oldest industrial regions in Europe. Like other old European industrial basins, the region suffers from heavy, aged industrial structures (e.g., the iron and steel industry, coal mining) and is undergoing a slow conversion to modern industrial structures, including small businesses which are active, among others, in the domain of Information Technology (IT). The main characteristics of the regional environment are the persistence of an old-fashioned bureaucratic management style, the coexistence of new small dynamic businesses and old big industries, the small size of IT businesses, and the very small size of the majority of IT units in other industries and in public organizations.

A regional study of Walloon SMEs made by the Technology Assessment Group (CITA) of our university [1] gives some significant data: in about 30% of businesses, only one person is in charge of software in general, and among the SMEs developing and/or managing Information Technology, 60% achieve these tasks with fewer than 5 persons. Such a very small size makes businesses highly dependent on a few projects, a few actors and/or a few technical capabilities, even though they can sometimes be very innovative in their domains. Another characteristic of the SMEs of that region is that they are surrounded by rapidly growing, dynamic regions (the French Lorraine region, the Grand Duchy of Luxembourg, ...) and evolve in a European context where the market is more and more open and competition is therefore increasing. In this context, software quality in general clearly becomes a crucial issue for Walloon SMEs, even though their resources are very limited.

The OWPL project (1), supported by public funding from the Walloon region, aims at assisting SMEs in their Software Process Improvement (SPI). In particular, the main goal is to provide SMEs and small public organizations with very simplified yet adequate models to initiate SPI approaches. Standard models like the CMM were initially designed for bigger structures, so they must be more or less deeply tailored and/or adapted to very small organizations like our target SMEs. The first reason is the cost of an evaluation process (about $25,000) and its duration (about 8 months) [2], which are disproportionate to the available resources.
In addition, the maturity level our target SMEs would reach according to a general assessment model like the CMM would be very low. Brodman and Johnson ([3], [4]) show that a great number of process improvement plans based on the CMM encountered problems and that a significant proportion of those problems (53%) were related to size; the success of a CMM-based process improvement plan actually grows with the number of people in charge of the software process. There is a similar need for adaptation with the SPICE model, even though this model is intended to be suitable for SMEs: the cost and effort remain too high for very small organizations, and a very simple, adapted model would be better suited for them, at least as a starting point. Another important point is that the number of actors involved in the software process is very small, so several roles may be held by the same person, which makes the use of such models very complex for small organizations. In addition, actors in SMEs are far from all being Software Engineering specialists, so adapting the vocabulary is necessary to allow the model to be used for self-assessment or for an assessment with light support.

In summary, regional SMEs critically need software process improvement in order to be competitive in the short or medium term. But, due to their very small size and their limited resources, they need an adapted model that they can put into practice immediately and in a simple way. The remainder of this paper describes the experience of the OWPL project, whose aim is precisely to produce and experiment with such a tailored model. The project is undertaken by the Technology Transfer Center of the University of Namur and funded by the Walloon Region (Belgium). Meanwhile, our center collaborates with the University of Nancy (France) and the Center of Public Research of the Grand Duchy of Luxembourg in a European ESSI project, SPIRAL*NET (2). This project has the more general goal of increasing the visibility of regional SMEs and of improving their software processes through the generalization of their best practices. The target of the European project is the French-speaking area composed of the Grand Duchy of Luxembourg, the Walloon part of Belgium and the French Lorraine.

(1) The acronym OWPL stands for Observatoire Wallon des Pratiques Logicielles, i.e., Walloon Observatory for Software Practices.
(2) SPIRAL*NET is the ESSI ESBNET project 27884.

2. The OWPL Approach

The main original idea of the OWPL approach to software process evaluation and improvement is to proceed using three nested models, which can be used either separately or as successive stages in the SPI:

1. A first, extremely simplified model (called the micro-evaluation model), designed to be as inexpensive as possible while still giving a first pertinent diagnosis to the assessed organization. The rationale is twofold: on the one hand, to make the assessed SME aware of its weaknesses, but also of the effective improvement it can expect; on the other hand, to determine the priorities of the subsequent stages of evaluation and improvement.

2. An intermediate model (called the mini-evaluation model), which is the core of the OWPL approach. This model can be viewed as a tailoring of the SPICE model (with significant influence from the CMM and Bootstrap) particularly adapted to the context described in the above section. This model can be used by itself and would be sufficient for the majority of small businesses and small organizations.
It can also be used as a stage that prepares a full evaluation according to one of the standard models.

3. The third model is the evaluation model we propose to organizations that have reached a certain maturity level and seek a more in-depth evaluation of one or more selected processes with reference to an international standard. In such cases we propose the use of the SPICE model.

Hereafter we give some details about the three nested models we propose.

2.1 The micro-evaluation model

The aim of the micro-evaluation is to give a first outlook of the evaluated organization, to establish a diagnosis and to guide the next steps of software process improvement. The main requirement driving the design of this model is to be as inexpensive as possible, in both time and money. The model therefore corresponds to a half-hour interview based on a well-prepared questionnaire. The questionnaire covers six key axes, selected as the most pertinent and the highest priority for our target organizations on the basis of former experience with SME evaluation. These axes are the following:
1. quality assurance,
2. customers management,
3. subcontractors management,
4. project management,
5. product management, and
6. training and human resources management.

The questionnaire includes a few dozen questions covering the axes above. Questions are open, and each of them is associated with one or more sub-questions allowing the interviewer, if need be, to adjust and refine the information obtained. Evaluations are performed by members of our software quality team; the interviewee should be the person in charge of software quality in the evaluated organization, which usually corresponds to one of the executive staff members or to the quality engineer, if this function exists.

Answers are interpreted according to a fixed grid, and two types of questions can be distinguished. On the one hand, questions that concern essential practices related to the general organization are rated on a linear scale according to the quality of the practice assessed. On the other hand, questions that concern software practices are rated in a double-entry grid according to the quality of the practice and to its effective implementation in the evaluated organization (only for some critical projects, for all projects, ...); an illustrative sketch of this scoring scheme is given at the end of this section. A detailed description of the micro-model can be found in [13].

The result of the micro-evaluation is drawn up in a report of about a dozen pages. A typical report first presents the approach briefly, then develops the results of the questionnaire and summarizes them according to the six axes, then analyses those results with respect to the situation of the evaluated organization (its age, history, declared goals, ...) and finally gives some recommendations to help the assessed unit improve. The micro-model has been tried out on a sample of two dozen representative organizations (small IT companies, IT services in other businesses, public organizations, ...).
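To make the scoring scheme described above concrete, the following short Python sketch shows one possible way to encode micro-evaluation answers and aggregate them per axis. It is purely illustrative: the six axis names are taken from the paper, but the 0..4 quality scale, the coverage levels and the weighting are assumptions made for the example, not the actual OWPL rating grid.

# Illustrative sketch only (not the published OWPL instrument): one possible
# representation of micro-evaluation answers and their aggregation per axis.
from dataclasses import dataclass
from statistics import mean
from typing import Optional

AXES = [
    "quality assurance",
    "customers management",
    "subcontractors management",
    "project management",
    "product management",
    "training and human resources management",
]

# Hypothetical coverage levels for the second entry of the double-entry grid:
# how widely a software practice is actually applied in the organization.
COVERAGE = {"none": 0.0, "critical projects only": 0.5, "all projects": 1.0}

@dataclass
class Answer:
    axis: str
    quality: int                    # practice quality on an assumed 0..4 scale
    coverage: Optional[str] = None  # None for general organizational practices

    def score(self) -> float:
        # Linear scale for organizational practices; quality weighted by
        # implementation coverage (the double-entry grid) for software practices.
        if self.coverage is None:
            return float(self.quality)
        return self.quality * COVERAGE[self.coverage]

def summarize(answers: list) -> dict:
    # Average score per axis, as a basis for the six-axis summary in the report.
    return {
        axis: round(mean(a.score() for a in answers if a.axis == axis), 2)
        for axis in AXES
        if any(a.axis == axis for a in answers)
    }

if __name__ == "__main__":
    sample = [
        Answer("project management", quality=3, coverage="critical projects only"),
        Answer("project management", quality=2, coverage="all projects"),
        Answer("quality assurance", quality=1),  # organizational practice, linear scale
    ]
    print(summarize(sample))  # {'quality assurance': 1.0, 'project management': 1.75}

In such a scheme, a practice of good quality that is applied only to a few critical projects scores lower than the same practice applied systematically, which mirrors the intent of the double-entry grid described above.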

[1] Watts S. Humphrey, Managing the Software Process, The SEI Series in Software Engineering, 1989.

[2] G. R. Koch et al., Process Assessment: the 'BOOTSTRAP' Approach, Information and Software Technology, 1993.

[3] J. G. Brodman and D. L. Johnson, What Small Businesses and Small Organizations Say about the CMM, Proceedings of the 16th International Conference on Software Engineering, 1994.

[4] James Bach, The Immaturity of the CMM, 1994.