Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and Beyond

Context and Motivation: Natural language processing has been used since the 1980s to construct tools for performing natural language (NL) requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms to implement these NL RE tools.
Problem: Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without being adapted to the requirements of the RE context in which the tool is used.
Principal Ideas: This panel discusses the problem and considers the evaluation of tools for several NL RE tasks across several contexts.
Contribution: The discussion is aimed at helping the RE field begin to consistently evaluate each of its tools according to the requirements of the tool's task.
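The IR-inherited evaluation methods at issue are chiefly precision and recall. As a minimal illustrative sketch (my own, not part of the panel abstract; the sets and counts are hypothetical), here is how they are computed for a tool's candidate output against a gold set, e.g. for trace-link recovery:

```python
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    """Return (precision, recall) of a tool's retrieved set against a gold set.

    Precision: fraction of the tool's output that is correct.
    Recall: fraction of the gold set that the tool found.
    """
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: a trace-link recovery tool returns 8 candidate
# links (0-7), of which 6 are among the 10 true links (2-11).
retrieved = set(range(8))
relevant = set(range(2, 12))
p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.60
```

Whether a tool with these numbers is good enough depends on the RE context: when a human analyst must vet every candidate anyway, missing true links (low recall) may cost far more than extra false candidates (low precision), which is precisely the kind of task-specific trade-off the panel argues evaluations should reflect.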
