Detecting Bad Smells in Use Case Descriptions

Use case modeling is a popular way to represent the functionality of a system to be developed; it consists of two parts: a use case diagram and use case descriptions. Use case descriptions are written in structured natural language (NL), and the use of NL can lead to poor descriptions that are ambiguous, inconsistent, and/or incomplete. Poor descriptions lead to missing requirements and to eliciting incorrect requirements, and they reduce the comprehensibility of the produced use case models. This paper proposes a technique to automatically detect bad smells in use case descriptions, i.e., symptoms of poor descriptions. First, to clarify what the bad smells are, we analyzed existing use case models to identify concrete examples of poor descriptions and compiled a catalogue of bad smells. Some of these bad smells can be refined into measures using the Goal-Question-Metric (GQM) paradigm so that their detection can be automated. The main contribution of this paper is this automated detection. We implemented an automated smell detector for 22 bad smells and assessed its usefulness in an experiment: the first version of our tool achieved a precision of 0.591 and a recall of 0.981.
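To illustrate how a bad smell can be refined into an automatically checkable measure, the following minimal sketch flags description steps that contain vague terms. The smell name, the vague-word list, and the step format here are illustrative assumptions, not the paper's actual catalogue or detector.

```python
import re

# Hypothetical "ambiguous word" smell: a step containing a vague term.
# The word list below is illustrative only, not the paper's catalogue.
VAGUE_TERMS = {"appropriate", "etc", "some", "several", "fast", "user-friendly"}

def detect_ambiguous_words(steps):
    """Return (step_number, term) pairs for steps containing vague terms.

    `steps` is a list of strings, one per use case description step.
    """
    findings = []
    for i, step in enumerate(steps, start=1):
        # Tokenize into lowercase words, keeping hyphenated terms together.
        words = re.findall(r"[a-z]+(?:-[a-z]+)*", step.lower())
        for w in words:
            if w in VAGUE_TERMS:
                findings.append((i, w))
    return findings

steps = [
    "The user enters a valid ID and password.",
    "The system displays some appropriate message.",
]
print(detect_ambiguous_words(steps))  # → [(2, 'some'), (2, 'appropriate')]
```

A real detector in the GQM style would pair each such measure with a question ("Are the steps free of vague wording?") derived from a quality goal, and report the matched locations so the analyst can rewrite the offending steps.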
