Experimental context classification: incentives and experience of subjects

There is a need to identify the factors that affect the results of empirical studies in software engineering research. Seemingly identical replications of controlled experiments still reach different conclusions because not all factors describing the experiment context are clearly defined and therefore controlled. In this article, a scheme for describing the participants of controlled experiments is proposed and evaluated. It consists of two main factors: the incentives offered to the participants and the experience of the participants. The scheme was evaluated by classifying a set of previously conducted experiments from the literature. The evaluation shows that the scheme is easy to use and understand, and that experiments classified in the same way largely point to the same results, which indicates that the scheme addresses relevant factors.
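The abstract does not reproduce the scheme's category levels, but its two-factor structure lends itself to a simple representation. Below is a minimal Python sketch of such a classification, assuming illustrative levels for incentive and experience; the specific `Incentive` and `Experience` values and the `group_by_context` helper are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum
from collections import defaultdict

# Hypothetical levels -- the actual levels are defined in the full paper.
class Incentive(Enum):
    MANDATORY_COURSE_PART = "mandatory part of a course"
    VOLUNTARY_PAID = "voluntary, paid participation"
    PART_OF_JOB = "participation as part of normal work"

class Experience(Enum):
    UNDERGRADUATE = "undergraduate student"
    GRADUATE = "graduate student"
    PROFESSIONAL = "industry professional"

@dataclass(frozen=True)
class ExperimentContext:
    """Two-factor classification of an experiment's participants."""
    incentive: Incentive
    experience: Experience

def group_by_context(experiments):
    """Group experiments that share the same classification, so that
    their results can be compared: per the paper's finding, experiments
    in the same class are expected to point at similar results."""
    groups = defaultdict(list)
    for name, context in experiments:
        groups[context].append(name)
    return dict(groups)

if __name__ == "__main__":
    # Toy data for illustration only.
    experiments = [
        ("Exp A", ExperimentContext(Incentive.MANDATORY_COURSE_PART,
                                    Experience.UNDERGRADUATE)),
        ("Exp B", ExperimentContext(Incentive.MANDATORY_COURSE_PART,
                                    Experience.UNDERGRADUATE)),
        ("Exp C", ExperimentContext(Incentive.PART_OF_JOB,
                                    Experience.PROFESSIONAL)),
    ]
    for context, names in group_by_context(experiments).items():
        print(f"{context.incentive.value} / {context.experience.value}: {names}")
```

Making `ExperimentContext` a frozen dataclass lets identically classified experiments hash to the same group key, which mirrors how the scheme is used in the paper's evaluation: comparing the conclusions of experiments that fall into the same class.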
