Motivations and measurements in an agile case study

With the recent emergence of agile software development methodologies, the software community awaits sound, empirical investigation of the impacts of agile practices in live settings. One means of conducting such research is the industrial case study, and a number of factors influence whether such a study succeeds. In this paper, we describe a case study performed at Sabre Airline Solutions™ that evaluated the effects of adopting Extreme Programming (XP) practices with a team that had characteristically plan-driven risk factors. We compare the team's business-related results (productivity and quality) to two published sources of industry averages and find that the Sabre team yielded above-average post-release quality and average to above-average productivity. We discuss our experience in conducting this case study, including how the data were collected, the rationale behind our data-collection process, and the obstacles encountered along the way. We identify four factors that can affect the outcome of an industrial case study: availability of data, tool support, cooperative personnel, and project status. Recognizing and planning for these factors is essential to conducting industrial case studies.
