Reporting experiments to satisfy professionals’ information needs

Although the aim of empirical software engineering is to provide evidence for selecting the appropriate technology, this work receives little recognition in industry, and results from empirical research only rarely seem to reach company decision makers. If experiment reports provided the information that software managers need, such reports could serve as a source of evidence when managers must select among software engineering technologies. To bridge this communication gap between researchers and professionals, we propose characterizing software managers' information needs so that empirical software engineering researchers know which information is relevant for decision-making and can make it available. We empirically investigated decision makers' information needs to identify the information they require to judge a software technology's appropriateness and impact, and we developed a model that characterizes these needs. To ensure that researchers provide this information when reporting results from experiments, we extended existing reporting guidelines accordingly. We then performed an experiment to evaluate the model's effectiveness. Software managers who read an experiment report structured according to the proposed model judged the technology's appropriateness significantly better than those who read a report on the same experiment that did not explicitly address their information needs. Our research shows that information about a technology, the context in which it is supposed to work, and, most importantly, the technology's impact on development costs and schedule as well as on product quality is crucial for decision makers.
