Product focused software process improvement: SPI in the embedded software domain

[Figure 6-1: Experiential Learning [Kolb 1984] — a cycle between concrete experience and conceptualisation, spanning divergent, assimilative, convergent and accommodative learning]

Following the different classes of experience and transformation, four different modes of learning are distinguished. These modes are:
• ‘divergent learning’, during which observations are analysed
• ‘assimilative learning’, during which models are built
• ‘convergent learning’, during which models are tested in practice
• ‘accommodative learning’, during which experiments are observed

According to Kolb, the combination of these four modes of learning produces the highest level of learning. The combination requires the learning process to include ‘observing phenomena, analysing them, developing models and theories about them and testing these theories and models in practice’ [Kolb 1984]. This is in fact what was identified in chapter 5: product quality control in practice is enabled through model building and model testing of process-product relationships.

6.1.2 Group learning

When considering learning in software process improvement, it is important to realise that the work is performed in an industrial environment, and an industrial environment demands group learning. Software development and process improvement are carried out within teams, projects, departments or companies; they always concern a group of people. The improvement objectives and learning processes are therefore shared. The term ‘group learning’ indicates that a set of people, over a period of time, share the same learning goals and learning process. In such a situation, knowledge has to be shared among organisational members and has to contribute to the synergy of the organisation [Jelinek 1979]. This is also often termed ‘organisational learning’.
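As a toy illustration (not part of the thesis), one pass through Kolb's four modes can be sketched as a small function: observations are analysed, a model is built from them, and the model is tested in practice. The defect data and the quality threshold below are hypothetical.

```python
# Toy sketch (not from the thesis): one pass through Kolb's experiential
# learning cycle -- observe, analyse, build a model, test it in practice.
# The data and the defect-rate "model" are hypothetical.

def learning_cycle(observations, build_model, test_model):
    """Combine the four modes of learning in a single pass."""
    analysed = [o for o in observations if o is not None]  # divergent: analyse observations
    model = build_model(analysed)                          # assimilative: build a model
    outcome = test_model(model)                            # convergent: test the model in practice
    return model, outcome                                  # accommodative: observe the result

# Example: a process-product relationship modelled as a mean defect rate.
defects_per_inspection = [4, 6, 5, None]
model, ok = learning_cycle(
    defects_per_inspection,
    build_model=lambda xs: sum(xs) / len(xs),
    test_model=lambda m: m < 10,  # hypothetical quality threshold
)
```

The point of the sketch is only the shape of the cycle: model building and model testing are explicit, separate steps, mirroring the chapter 5 observation about process-product relationships.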
Organisational learning is defined as a skilled process in which knowledge is created, acquired, and transferred, and through which behaviour is modified based on the new knowledge and insights [Garvin 1993]. It is important to note that organisations as such cannot learn: it is the individual people who learn, and who learn together [Weggeman 1997]. This definition reflects that learning happens when new insights arise. Sometimes they are newly created; sometimes they arrive from outside the organisation or are communicated by knowledgeable insiders. Such new insights are, however, not enough. Without accompanying changes in the way that work gets done, only the potential for improvement exists [Garvin 1993]. George Huber states similarly that learning occurs when ‘the potential behaviours are changed’ [Huber 1991]. Behaviour does not need to be changed for every situation, but the potential ways of working need to be expanded. So, effective learning results in altered (potential) behaviour; if behaviour is not changed, learning has apparently not occurred.

Argyris and Schön distinguish two modes of learning: single loop and double loop [Argyris and Schön 1978]:
• Single loop learning. Learning in which the actor only learns within the confines of his or her theory in use. The focus is on the operational level: detecting and correcting errors, competencies and routines.
• Double loop learning. Double loop learning starts when an event is diagnosed as incompatible with the actor’s current theory in use. With double loop learning, the current theory and models are altered through new insights.

In practice most organisations focus only on single loop learning [Argyris 1993]: optimisation is only done within the current way of working. This in itself is not wrong. Through repetitive experiences, organisations become skilled in their work, and create competitive advantages based on these skills.
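The single loop/double loop distinction is often explained with a thermostat analogy, which the following sketch illustrates (again an illustration, not from the thesis; the validity check and revision rule are hypothetical). Single loop learning corrects deviations against a fixed norm; double loop learning questions the norm itself.

```python
# Toy sketch (not from the thesis): single vs. double loop learning,
# in the classic thermostat analogy. Single loop corrects deviations
# against a fixed norm; double loop questions the norm itself.

def single_loop(measured, target):
    """Correct the error; the governing target ('theory in use') stays fixed."""
    return target - measured

def double_loop(measured, target, target_is_valid, revise):
    """First ask whether the target itself should change, then correct."""
    if not target_is_valid(target):
        target = revise(target)  # alter the current theory through new insight
    return target, target - measured

# Single loop: only the deviation from the fixed target is corrected.
correction = single_loop(measured=18, target=22)

# Double loop: an event incompatible with the current theory (say, a new
# energy policy) makes the old target invalid, so it is revised first.
new_target, correction2 = double_loop(
    measured=18, target=22,
    target_is_valid=lambda t: t <= 20,  # hypothetical validity rule
    revise=lambda t: 20,                # hypothetical revised norm
)
```

Both loops end with a correction; the difference is whether the target that drives the correction is itself open to change.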
Sometimes new approaches become available with which an organisation has no experience. In such cases it might be better to switch to such a new approach, because it fits better than the historic approaches. This is double loop learning, which many organisations tend to see as a threat because it conflicts with existing and established habits. It is, however, also dangerous for an organisation to constantly adopt new ways of working, because all knowledge gained until then might immediately become outdated: ‘the known can in many situations be preferred over the unknown’ [March 1991]. A balance should be found between optimising the current processes (single loop learning) and experimenting with new theories and approaches to find out whether those are much better than existing ones (double loop learning). So, learning theory promotes a parallel application of optimisation of current practices and experimentation with new ones. This idea should be considered for the RPM conceptual model.

The skills and capabilities of learning organisations are divided into three classes [Senge 1990]:
• ‘aspiration’: the capacity of individuals, teams, and eventually larger organisations to orient toward what they truly care about, and to change because they want to, not just because they need to
• ‘reflection and conversation’: the capacity to reflect on patterns of behaviour and on assumptions deeply hidden in a person’s behaviour, both individually and collectively
• ‘conceptualisation’: the capacity to see larger systems and forces at play and to construct public, testable ways of expressing these views

According to Senge, there are thus three groupings of learning skills. Firstly, there is the motivation to learn and improve. This includes having time for learning, learning objectives, interest in learning, etc. Management commitment for learning tasks is also one of the aspects that falls under aspiration.
Secondly, there is the willingness to discuss deep assumptions. This is what Argyris and Schön call ‘double loop learning’. Finally, there is conceptualisation, which corresponds to the model building and testing of experiential learning theory [Kolb 1984]. These three skills and capabilities for establishing learning need to be addressed by the RPM conceptual model.

Learning theory supports the view that a learning method should specify learning goals explicitly [Garvin 1993]. Defining these goals is difficult, but in a business environment it makes sense to base them on business goals. These goals can differ between organisations, depending on, for example, the market in which an organisation operates, the type of product that is delivered, the organisation of the development teams, or the country in which the products will be used. Learning practices should be directed at the goals of the organisation, which can be made operational by, for example, managing on performance indicators [Garvin 1993]. Goals for learning always vary between organisations, because of their different strategies [Agarwal et al. 1997]. So, learning theory indicates that learning objectives should be situational, depending on the specific needs of an organisation. In chapter 5 it was indicated that the RPM conceptual model does not prescribe generic process improvement goals for all organisations. This differs from current SPI approaches, which prescribe a generic set of priorities and a sequence in which improvements should be implemented. In chapter 5 it was argued that defining organisation-specific improvement objectives is probably better; learning theory appears to support this decision. The way in which these goals are reached is not prescribed: the learning process of each organisation can be different, because the context in which learning is established is also different.
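A minimal sketch of what ‘managing on performance indicators’ [Garvin 1993] can look like in code (an illustration, not from the thesis; the indicator name, data and threshold are hypothetical): an organisation-specific goal is made operational as a measurable threshold, and learning is steered by whether the measurements meet it.

```python
# Toy sketch (not from the thesis): making an organisation-specific
# learning goal operational through a performance indicator, in the
# spirit of [Garvin 1993]. Indicator name and threshold are hypothetical.

def indicator_met(measurements, threshold):
    """Compare the mean of measured values against an organisation-specific threshold."""
    mean = sum(measurements) / len(measurements)
    return mean <= threshold

# One organisation may track defect density, another delivery delay;
# the indicator and its threshold are situational, not generic.
defect_density = [0.4, 0.6, 0.3]  # defects per KLOC, hypothetical data
goal_reached = indicator_met(defect_density, threshold=0.5)
```

The design point matches the text: the mechanism (measure, compare against a goal) is generic, but the indicator and threshold are chosen per organisation.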
The final aspect of organisational learning relevant for this thesis is based on a phenomenon called ‘creative tension’ [Senge 1990]: the difference between current reality and a desired future. The gap between the current reality and the desired future should not be too large, because then the objectives become too abstract for people and concrete actions towards improvement are no longer clearly visible. On the other hand, the gap should not be too small either, because then no action will be taken at all, since the need for action might seem unnecessary. Note the resemblance between ‘creative tension’ and ‘assessment based’ improvement programmes for software engineering, in which yearly benchmarks are used to set next year’s objectives. This creative tension principle will be adopted, because it appears practical to steer learning towards reachable objectives. So an improvement method for embedded product development should focus on goals that are reachable and within the bounds of ‘creative tension’.

Besides general guidelines on implementing organisational learning, learning theory also provides several criteria for successful learning. Such ‘learning enablers’ are also relevant for this thesis and are therefore handled in the next section.

6.1.3 Proposed incorporation of learning concepts in the RPM conceptual model

Based on the foregoing analysis of learning theory, it is now possible to summarise which concepts need to be incorporated in the RPM conceptual model for product focused SPI, which was presented in chapter 5. The following learning concepts will be incorporated:
1. Learning is the process by which existing knowledge is enriched or new knowledge is created [Weggeman 1997]. Knowledge is the personal ability that enables a person to perform a certain task [Weggeman 1997]. Transferring implicit (tacit) knowledge is not addressed in this thesis.
2.
Organisational learning is defined as a skilled process in which knowledge is created, acquired, and transferred, and through which (potential) behaviour is modified based on new knowledge [Garvin 1993]. Organisational learning takes place within groups of people: organisations do not learn, but the people in those organisations do.
3. During learning, experiences are transformed into

[1]  Peretz Shoval,et al.  A combined methodology for information systems analysis and design based on ISAC and NIAM , 1986, Inf. Syst..

[2]  Robert L. Glass,et al.  Measuring software design quality , 1990 .

[3]  Susan Rosenbaum,et al.  Schlumberger's Software Improvement Program , 1994, IEEE Trans. Software Eng..

[4]  A. R. Ilersic,et al.  Research methods in social relations , 1961 .

[5]  Jjm Jos Trienekens,et al.  Software quality from a business perspective , 1997 .

[6]  T.M.A. Bemelmans,et al.  Bestuurlijke informatiesystemen en automatisering , 1994 .

[7]  Curtis R. Cook,et al.  Real-time software metrics , 1994, J. Syst. Softw..

[8]  Alan M. Davis,et al.  Identifying and measuring quality in a software requirements specification , 1993, [1993] Proceedings First International Software Metrics Symposium.

[9]  Veikko Seppänen,et al.  Practical process improvement for embedded real-time software , 1996 .

[10]  Fj Freek Erens The synthesis of variety : developing product families , 1996 .

[11]  Alan C. Gillies,et al.  Software Quality: Theory and Management , 1992 .

[12]  Derek J. Hatley,et al.  Strategies for Real-Time System Specification , 1987 .

[13]  Robert B. Grady,et al.  Practical Software Metrics for Project Management and Process Improvement , 1992 .

[14]  D. Ulrich Intellectual Capital = Competence x Commitment , 1998 .

[15]  Mary Shaw,et al.  Prospects for an engineering discipline of software , 1990, IEEE Software.

[16]  Russell Moseley Practice of science , 1985, Nature.

[17]  Shari Lawrence Pfleeger,et al.  Software Engineering: The Production of Quality Software , 1987 .

[18]  Hsiang-Tao Yeh Software Process Quality , 1993 .

[19]  Victor R. Basili,et al.  A Methodology for Collecting Valid Software Engineering Data , 1984, IEEE Transactions on Software Engineering.

[20]  Richard H. Thayer,et al.  Software Requirements Engineering Glossary , 2000 .

[21]  J. Ruiz Moreno  [Organizational learning] , 2001, Revista de enfermeria.

[22]  Julio Cesar Sampaio do Prado Leite A Survey on Requirements Analysis , 1987 .

[23]  Gerald M. Weinberg Quality software management (vol. 2): first-order measurement , 1993 .

[24]  Egon Berghout,et al.  Assessing feedback of measurement data: relating Schlumberger RPS practice to learning theory , 1997, Proceedings Fourth International Software Metrics Symposium.

[25]  Len Bass,et al.  Architecture-Based Development. , 1999 .

[26]  Mark C. Paulk,et al.  Software Product Evaluation , 2001 .

[27]  Shari Lawrence Pfleeger,et al.  Software Metrics : A Rigorous and Practical Approach , 1998 .

[28]  Paul Clements,et al.  Predicting software quality by architecture-level evaluation , 1995 .

[29]  Rini van Solingen,et al.  Product focused software process improvement (P-SPI) : concepts and their application , 1999 .

[30]  Henry Mintzberg,et al.  Structure in Fives: Designing Effective Organizations , 1983 .

[31]  Gerald Kotonya,et al.  Software Requirements Engineering , 1999 .

[32]  Chris Argyris,et al.  Reasoning, learning, and action , 1982 .

[33]  Donald A. Schön,et al.  Organizational Learning: A Theory Of Action Perspective , 1978 .

[34]  Robert B. Grady,et al.  Software Metrics: Establishing a Company-Wide Program , 1987 .

[35]  Mark C. Paulk,et al.  Capability Maturity Model for Software, Version 1.1 , 1993 .

[36]  Alan M. Davis,et al.  A Strategy for Comparing Alternative Software Development Life Cycle Models , 1988, IEEE Trans. Software Eng..

[37]  Bashar Nuseibeh,et al.  Viewpoints: A Framework for Integrating Multiple Perspectives in System Development , 1992, Int. J. Softw. Eng. Knowl. Eng..

[38]  Michiel van Genuchten,et al.  Software Quality in Consumer Electronics Products , 1996, IEEE Softw..

[39]  E. Ziegel Juran's Quality Control Handbook , 1988 .

[40]  Maurice H. Halstead,et al.  Elements of software science , 1977 .

[41]  Barry W. Boehm  A Spiral Model of Software Development and Enhancement , 1988 .

[42]  Stanislaw Wrycza,et al.  The ISAC-driven transition between requirements analysis and ER conceptual modelling , 1991, Inf. Syst..

[43]  ISO/IEC TR 15504-2  Information technology — Software process assessment — Part 2: A reference model for processes and process capability , 1998 .

[44]  Markku Oivo Quantitative management of software production using object-oriented models , 1994 .

[45]  W. Wayt Gibbs,et al.  Software's Chronic Crisis , 1994 .

[46]  Marvin V. Zelkowitz,et al.  Principles of software engineering , 1979 .

[47]  Robert B. Grady,et al.  Successfully applying software metrics , 1994, Computer.

[48]  Andreas Birk,et al.  A validation approach for product-focused process improvement , 1999 .

[49]  Karen Ayas,et al.  Design for learning for innovation , 1997 .

[50]  Jawed I. A. Siddiqi,et al.  Requirements Engineering: The Emerging Wisdom , 1996, IEEE Softw..

[51]  Albert T. Kündig A Note on the Meaning of "Embedded Systems" , 1986, Embedded Systems.

[52]  Claes Wohlin,et al.  Experimentation in Software Engineering , 2000, The Kluwer International Series in Software Engineering.

[53]  Dick Bowman,et al.  Principles of software engineering management , 1989, APLQ.

[54]  M W Alford,et al.  Software Requirements Engineering Methodology , 2002 .

[55]  H. C. Steinz,et al.  Safety and Reliability Assessment on Products and Organisations , 1998 .

[56]  Mark C. Paulk,et al.  Key Practices of the Capability Maturity Model , 1991 .

[57]  Robin W. Whitty,et al.  Software quality assurance and measurement : a worldwide perspective , 1995 .

[58]  Michael Daskalantonakis,et al.  Achieving higher SEI levels , 1994, IEEE Software.

[59]  Norman E. Fenton Software Engineering Metrics, Volume 1: Measures and Validations, by Martin Sheppard, McGraw-Hill, 1993 (Book Review) , 1994, Softw. Test. Verification Reliab..

[60]  Victor R. Basili,et al.  The Experimental Paradigm in Software Engineering , 1992, Experimental Software Engineering Issues.

[61]  H. D. Rombach,et al.  THE EXPERIENCE FACTORY , 1999 .

[62]  John E. Gaffney,et al.  Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation , 1983, IEEE Transactions on Software Engineering.

[63]  Egon Berghout,et al.  Interrupts: Just a Minute Never Is , 1998, IEEE Softw..

[64]  Victor R. Basili,et al.  The TAME Project: Towards Improvement-Oriented Software Environments , 1988, IEEE Trans. Software Eng..

[65]  Raymond Dion,et al.  Process improvement and the corporate balance sheet , 1993, IEEE Software.

[66]  William C. Hetzel The sorry state of software practice measurement and evaluation , 1995, J. Syst. Softw..

[67]  Joseph P. Cavano,et al.  A framework for the measurement of software quality , 1978, SIGMETRICS Perform. Evaluation Rev..

[68]  Pasi Kuvaja,et al.  Product Focused Process Improvement : Experiences of Applying the PROFES Improvement Methodology at DRÄGER , 1999 .

[69]  James E. Rumbaugh,et al.  Getting Started: Using Use Cases to Capture Requirements , 1994, J. Object Oriented Program..

[70]  P. Senge The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization , 2014 .

[71]  Gordon B. Davis,et al.  Management information systems : conceptual foundations, structure, and development , 1985 .

[72]  Robert L. Glass,et al.  Software Creativity , 1995 .

[73]  Victor R. Basili,et al.  Software Quality: An Overview from the Perspective of Total Quality Management , 1994, IBM Syst. J..

[74]  D. Dunn,et al.  Experiential Learning , 2019, High Impact Teaching for Sport and Exercise Psychology Educators.

[75]  Mark C. Paulk,et al.  Capability Maturity Model for Software , 2001 .

[76]  Barry W. Boehm,et al.  A spiral model of software development and enhancement , 1986, Computer.

[77]  Rj Rob Kusters,et al.  From quality requirement factor to quality factor: an end-user based method , 1995 .

[78]  D. Benyon,et al.  Towards a Tool Kit for the Systems Analyst , 1987, Comput. J..

[79]  J. Stanley Quasi-Experimentation , 1965, The School Review.

[80]  Aarnout Brombacher,et al.  Systematic failures in safety systems : how to analyse, how to optimise , 1996 .

[81]  George E. Stark,et al.  Using metrics in management decision making , 1994, Computer.

[82]  Dietmar Pfahl,et al.  Experience with explicit modelling of relationships between process and product quality , 1998 .

[83]  Egon Berghout,et al.  The Goal/Question/Metric method: a practical guide for quality improvement of software development , 1999 .

[84]  Gary James Jason,et al.  The Logic of Scientific Discovery , 1988 .

[85]  Pasi Kuvaja,et al.  Bootstrap 3.0 — Software Process Assessment Methodology , 1998 .

[86]  P. Kidwell,et al.  The mythical man-month: Essays on software engineering , 1996, IEEE Annals of the History of Computing.

[87]  Watts S. Humphrey,et al.  Software process improvement at Hughes Aircraft , 1991, IEEE Software.

[88]  Michiel van Genuchten,et al.  Towards a Software Factory , 1992, Springer Netherlands.

[89]  Marvin V. Zelkowitz,et al.  Software Process Improvement in the NASA Software Engineering Laboratory , 1994 .

[90]  G. Huber Organizational Learning: The Contributing Processes and the Literatures , 1991 .

[91]  P. Senge The leader's new work: Building learning organizations. , 1998 .

[92]  Natasa Rupcic,et al.  The fifth discipline-the art and practice of the learning organisation , 2002 .

[93]  Barry W. Boehm,et al.  Software Engineering Economics , 1993, IEEE Transactions on Software Engineering.

[94]  Lionel C. Briand,et al.  Practical guidelines for measurement-based process improvement , 1996, Softw. Process. Improv. Pract..

[95]  R.J. Kusters,et al.  User-perceptions of embedded software quality , 1997, Proceedings Eighth IEEE International Workshop on Software Technology and Engineering Practice incorporating Computer Aided Software Engineering.

[96]  Markku Oivo,et al.  No Improvement Without Feedback: Experiences from Goal-Oriented Measurement at Schlumberger , 1996, EWSPT.

[97]  Ware Myers,et al.  Measures for Excellence: Reliable Software on Time, Within Budget , 1991 .

[98]  Shari Lawrence Pfleeger,et al.  Measurement based process improvement , 1994, IEEE Software.

[99]  Noel Entwistle,et al.  Styles of learning and teaching , 1981 .

[100]  Joseph A. Goguen,et al.  Techniques for requirements elicitation , 1993, [1993] Proceedings of the IEEE International Symposium on Requirements Engineering.

[101]  W. W. Royce,et al.  Managing the development of large software systems: concepts and techniques , 1987, ICSE '87.

[102]  RJ Rob Kusters,et al.  Identifying embedded software quality: two approaches , 1999 .

[103]  Gordon B. Davis,et al.  Strategies for Information Requirements Determination , 1982, IBM Syst. J..

[104]  Markku Oivo,et al.  Adopting GQM-Based Measurement in an Industrial Environment , 1998, IEEE Softw..

[105]  Teade Punter,et al.  The MEMA-model: towards a new approach for Method Engineering , 1996, Inf. Softw. Technol..

[106]  Anita D. Carleton,et al.  Case studies of software-process-improvement measurement , 1994, Computer.

[107]  Paul Goodman Practical Implementation of Software Metrics , 1993 .

[108]  Gerald M. Weinberg,et al.  Quality Software Management Volume 1: Systems Thinking , 1991 .

[109]  Martin Shepperd,et al.  Derivation and Validation of Software Metrics , 1993 .

[110]  Ritu Agarwal,et al.  Infusing learning into the information systems organization , 1997 .

[111]  Paul Rook,et al.  Controlling software projects , 1986, Softw. Eng. J..

[112]  Paul Clements,et al.  Software Architecture: An Executive Overview , 1996 .

[113]  A.P.M. van Uijtregt Product focused software process improvement : integrating SPI and SPQ approaches into a quality improvement method for RPS , 1998 .

[114]  Joscha Bach,et al.  The Immaturity of the CMM , 1994 .

[115]  Farhad Analoui,et al.  Training and transfer of learning , 1994 .

[116]  Pieter Derks,et al.  Product focused SPI in the embedded systems industry - Experiences of Dräger, Ericsson and Tokheim , 1999 .

[117]  Sjaak Brinkkemper,et al.  Method engineering: engineering of information systems development methods and tools , 1996, Inf. Softw. Technol..

[118]  M. Shepperd,et al.  Practical software metrics for project management and process improvement: R Grady Prentice-Hall (1992) £30.95 282 pp ISBN 0 13 720384 5 , 1993, Inf. Softw. Technol..

[119]  James D. Herbsleb,et al.  After the Appraisal: A Systematic Survey of Process Improvement, its Benefits, and Factors that Influence Success. , 1995 .

[120]  Teresa M. Amabile,et al.  How to kill creativity. , 1998, Harvard business review.

[121]  Willoughby Reliability by Design , 1978 .

[122]  Dietmar Pfahl,et al.  A product-process dependency definition method , 1998, Proceedings. 24th EUROMICRO Conference (Cat. No.98EX204).

[123]  Horst Zuse,et al.  Software complexity: Measures and methods , 1990 .

[124]  John R. Anderson Cognitive Psychology and Its Implications , 1980 .

[125]  Milton Harris,et al.  Organization Design , 2000, Manag. Sci..

[126]  W. Morven Gentleman If software quality is a perception, how do we measure it? , 1996, Quality of Numerical Software.

[127]  Ian Sommerville,et al.  Requirements engineering with viewpoints , 1996, Softw. Eng. J..

[128]  Norman E. Fenton,et al.  Software measurement: A conceptual framework , 1990, J. Syst. Softw..

[129]  David Zubrow,et al.  Moving On Up: Data and Experience Doing CMM-Based Process Improvement , 1995 .

[130]  Daniel J. Paulish,et al.  Software metrics - a practitioner's guide to improved product development , 1993, Chapman & Hall computing series.