Model-based tests of truisms

Software engineering (SE) truisms capture broadly applicable principles of software construction. The trouble with truisms is that such general principles may not apply in specific cases. This paper tests the specificity of two SE truisms: (a) increasing software process maturity level is a desirable goal; and (b) it is best to remove errors during the early phases of the software lifecycle. Our tests are based on two well-established SE models: (1) Boehm et al.'s COCOMO II cost estimation model; and (2) Raffo's discrete-event software process model of a software project lifecycle. After extensive simulations of these models, the TAR2 treatment learner was applied to find the model parameters that most improved the potential performance of the real-world systems being modelled. The case studies presented here show that these truisms are clearly sub-optimal for certain projects, since other factors proved far more critical. Hence, we advise against truism-based process improvement. This paper offers a general alternative framework for model-based assessment of methods to improve software quality: modelling + validation + simulation + sensitivity. That is, after recording what is known in a model, that model should be validated, explored via simulation, and then summarized to find the key factors that most improve its behavior.
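The simulation + sensitivity steps of the framework can be sketched in a few lines. The toy below is illustrative only: the attribute names and multiplier values are invented, the cost model is a simplified COCOMO-like power law rather than the real COCOMO II tables, and the summarizer is a minimal lift-based ranking in the spirit of treatment learning, not TAR2 itself.

```python
import random

# Toy COCOMO-like effort model: effort = a * size^b * product of multipliers.
# Attribute names and multiplier values are invented for illustration; they
# are NOT the actual COCOMO II effort-multiplier tables.
ATTRIBUTES = {
    "pmat": {"low": 1.2, "nominal": 1.0, "high": 0.9},   # process maturity
    "acap": {"low": 1.3, "nominal": 1.0, "high": 0.8},   # analyst capability
    "tool": {"low": 1.1, "nominal": 1.0, "high": 0.95},  # use of tools
}

def simulate(n=10000, size=100, a=2.94, b=1.1):
    """Monte Carlo over random attribute settings; return (settings, effort) runs."""
    runs = []
    for _ in range(n):
        settings = {k: random.choice(list(v)) for k, v in ATTRIBUTES.items()}
        effort = a * size ** b
        for attr, level in settings.items():
            effort *= ATTRIBUTES[attr][level]
        runs.append((settings, effort))
    return runs

def best_treatment(runs):
    """TAR2-style summary: rank each (attribute, level) by its lift, i.e. how
    much imposing it raises the share of best-quartile (low-effort) runs."""
    efforts = sorted(e for _, e in runs)
    threshold = efforts[len(efforts) // 4]          # best quartile = "good"
    baseline = sum(e <= threshold for _, e in runs) / len(runs)
    lifts = {}
    for attr, levels in ATTRIBUTES.items():
        for level in levels:
            subset = [e for s, e in runs if s[attr] == level]
            good = sum(e <= threshold for e in subset) / len(subset)
            lifts[(attr, level)] = good / baseline
    return max(lifts, key=lifts.get)

random.seed(1)
attr, level = best_treatment(simulate())
print(attr, level)
```

In this toy, the learner singles out whichever attribute spans the widest multiplier range, illustrating the paper's point: the factor that most improves model behavior need not be the one a truism (e.g., "raise process maturity") would predict.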
