Context: Evidence-Based Software Engineering (EBSE) has recently been proposed as a methodology to help practitioners improve their technology adoption decisions given their particular circumstances. Formally, Systematic Literature Reviews (SLRs) are a part of EBSE. There has been a noticeable uptake of SLRs by researchers, but little has been published on whether, and if so how, the EBSE methodology has been applied in full.
Objectives: To empirically evaluate the use of EBSE by undergraduate students. To consider how insights into the students' performance with EBSE can be applied to software practitioners' expected performance with EBSE. To gain insights into the use of Supplementary EBSE Guidelines and associated Assessment Schemes.
Method: 37 final-year undergraduate students completed a coursework assessment that required them to use EBSE to evaluate one or more of four Requirements Management Tools (RMTs): Telelogic DOORs®, Borland's Caliber® Analyst, Compuware Optimal Trace™, and GODA ARTS. Students were provided with a range of EBSE resources, including a set of Supplementary EBSE Guidelines, to assist them with their EBSE evaluation. An Assessment Scheme was developed to assess the degree to which students followed the Supplementary Guidelines. A feedback questionnaire, completed by 12 of the 37 students, complemented the Assessment Scheme.
Results: 78% of students chose to evaluate an RMT that is currently a market leader. 62% of students subsequently recommended the adoption of their chosen RMT. Some students made a recommendation because the Guidelines indicated they should, rather than because the evidence 'entailed' that recommendation. Only 8% of students intentionally made no recommendation, and this seems to have been on the basis of the poor quality of the evidence available on the chosen RMT(s). All 12 students who completed the feedback questionnaire reported that this was the hardest, or nearly the hardest, assessment they had ever done. 67% of these 12 students reported that they had not given themselves sufficient time to complete the evaluation, and 83% reported that they had to balance their time on this evaluation with other commitments. The 12 students found EBSE steps 1 and 4 to be the easiest and EBSE steps 2 and 3 to be the hardest, and they generally reported that they had received sufficient support from the EBSE resources made available to them.
Conclusion: This study presents independent, empirical evidence of the full use of EBSE, to complement the growing body of published SLRs. We believe that the findings on students' use of EBSE are relevant to understanding professional practitioners' prospective use of EBSE. Both students and professionals would find EBSE very challenging, in particular the SLR; would be subject to constraints and trade-offs; may not be able to find relevant and rigorous evidence; and may make errors in their critical thinking whilst conducting the evaluation. Our findings should benefit researchers, practitioners and educators.