A FRAMEWORK FOR MEASURING QUALITY OF MODELS: EXPERIENCES FROM A SERIES OF CONTROLLED EXPERIMENTS

Tao Yue, Shaukat Ali, Maged Elaasar

Certus Software V&V Center, Simula Research Laboratory, Oslo, Norway
{tao, shaukat}@simula.no

IBM Canada Ltd, Rational Software, Ottawa Lab, Canada
melaasar@ca.ibm.com

Abstract

Controlled experiments in model-based software engineering, especially those involving human subjects performing modeling tasks, often require comparing the models produced by the subjects with reference models, which are considered correct and complete. The purpose of such comparison is to assess the quality of the subjects' models so that experiment hypotheses can be accepted or rejected. Model quality is typically measured quantitatively using metrics. Manually defining such metrics for large modeling languages is cumbersome and error-prone; it can also yield metrics that fail to systematically cover relevant details and, in turn, produce biased results.
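To make the kind of comparison described above concrete, the following Python sketch scores a subject's model against a reference model. It is an illustrative assumption, not the paper's actual framework: models are reduced to flat sets of element identifiers (a real metric suite would be derived systematically from the modeling language's metamodel), and the model_quality helper and the identifier scheme are hypothetical.

    # Minimal sketch (assumed representation): each model is a set of element
    # identifiers such as "Class:Order" or "Assoc:Order-Customer".

    def model_quality(subject: set[str], reference: set[str]) -> dict[str, float]:
        """Score a subject's model against a reference model.

        completeness: fraction of reference elements the subject reproduced.
        correctness:  fraction of subject elements present in the reference.
        """
        matched = subject & reference
        return {
            "completeness": len(matched) / len(reference) if reference else 1.0,
            "correctness": len(matched) / len(subject) if subject else 1.0,
        }

    if __name__ == "__main__":
        reference = {"Class:Order", "Class:Customer", "Assoc:Order-Customer"}
        subject = {"Class:Order", "Class:Customer", "Class:Invoice"}
        print(model_quality(subject, reference))
        # Both scores are 2/3: the subject reproduced two of three reference
        # elements, and two of its three elements are correct.

The point of the sketch is only to show why "correct" and "complete" are two distinct quantities: a subject can score high on one and low on the other, which is why both are needed to accept or reject experiment hypotheses.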
