Evaluating the Effects of Different Requirements Representations on Writing Test Cases

[Context and Motivation] A system must be tested to ensure that its requirements are met; consequently, tests are often derived manually from requirements. However, requirements representations are diverse: from traditional IEEE-style text, to models, to agile user stories, the RE community of research and practice has explored various ways to capture requirements. [Question/problem] Do these different representations influence the quality or coverage of test suites? The state of the art offers no insight into whether the representation of requirements has an impact on the coverage, quality, or size of the resulting test suite. [Results] In this paper, we report on a family of three experiment replications, conducted with 148 students, that examines the effect of different requirements representations on test creation. We find that, in general, the different requirements representations have no statistically significant impact on the number of derived tests, but that specific affordances of a representation affect test quality: for example, traditional textual requirements make it easier to derive less abstract tests, whereas goal models yield less inconsistent test purpose descriptions. [Contribution] Our findings give insights into the effects of requirements representation on test derivation by novice testers. A key limitation of our work is its reliance on students as participants.
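
To make concrete what "manually deriving a test from a requirement" means in this setting, the following minimal sketch pairs one hypothetical requirement, stated both as IEEE-style text and as an agile user story, with a test a novice tester might derive from either representation. The requirement, the AuthService class, and the pytest test are invented for illustration only; they are not part of the study's experimental material.

```python
# Hypothetical requirement in two representations (illustration only):
#
#   IEEE-style text:
#     "The system shall lock a user account after three consecutive
#      failed login attempts."
#
#   Agile user story:
#     "As an administrator, I want accounts locked after three failed
#      logins, so that brute-force attacks are mitigated."

import pytest


class AccountLockedError(Exception):
    """Raised when a login is attempted on a locked account."""


class AuthService:
    """Minimal illustrative stand-in for a system under test."""

    MAX_FAILURES = 3

    def __init__(self):
        self._passwords = {}
        self._failures = {}

    def register(self, user, password):
        self._passwords[user] = password
        self._failures[user] = 0

    def login(self, user, password):
        if self._failures[user] >= self.MAX_FAILURES:
            raise AccountLockedError(user)
        if self._passwords[user] == password:
            self._failures[user] = 0
            return True
        self._failures[user] += 1
        return False


# A test case manually derived from either representation above: it
# exercises the triggering condition (three consecutive failures) and
# checks the expected outcome (the account is locked afterwards).
def test_account_locks_after_three_failed_logins():
    auth = AuthService()
    auth.register("alice", "correct-horse")

    for _ in range(3):
        assert auth.login("alice", "wrong") is False

    # Even the correct password is now rejected.
    with pytest.raises(AccountLockedError):
        auth.login("alice", "correct-horse")
```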
