Evaluating capture and replay and model-based performance testing tools: an empirical comparison

[Context] A variety of testing tools have been developed to support and automate software performance testing activities. These tools may use different techniques, such as Model-Based Testing (MBT) or Capture and Replay (CR). [Goal] For software companies, it is important to evaluate such tools with respect to the effort required to create test artifacts with them; despite its importance, there are few empirical studies comparing performance testing tools, especially tools developed with different approaches. [Method] We are conducting experimental studies to provide evidence about the effort required to use CR-based tools and MBT tools. In this paper, we present our first results, evaluating the effort (time spent) when using the LoadRunner and Visual Studio CR-based tools and the PLeTsPerf MBT tool to create performance test scripts and scenarios for testing Web applications, in the context of a collaboration project between the Software Engineering Research Center at PUCRS and a technological laboratory of a global IT company. [Results] Our results indicate that, for simple testing tasks, the effort of using a CR-based tool was lower than that of using an MBT tool, but as the complexity of the testing tasks increases, the advantage of using the MBT tool grows significantly. [Conclusions] To conclude, we discuss the lessons we learned from the design, operation, and analysis of our empirical experiment.
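For illustration, the sketch below shows the kind of scripted load-test scenario that such tools generate from a recorded session (CR) or from a model (MBT), or that a tester would otherwise write by hand: a fixed number of virtual users repeatedly requesting pages of a Web application while response times are recorded. This is a minimal, hand-written Python sketch, not the output of LoadRunner, Visual Studio, or PLeTsPerf; the target URL, paths, user count, and iteration count are illustrative placeholders.

# Minimal sketch of a performance test scenario for a Web application.
# Hypothetical example only: BASE_URL, PATHS, USERS, and ITERATIONS are
# placeholders, not artifacts produced by the tools studied in the paper.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

BASE_URL = "http://localhost:8080"      # placeholder system under test
PATHS = ["/", "/search", "/product/1"]  # placeholder navigation steps
USERS = 10                              # concurrent virtual users
ITERATIONS = 5                          # times each user repeats the scenario

def virtual_user(user_id):
    """One virtual user walking through the scenario and timing each request."""
    timings = []
    for _ in range(ITERATIONS):
        for path in PATHS:
            start = time.perf_counter()
            try:
                with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                    resp.read()
            except OSError:
                continue  # this sketch only records successful requests
            timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(virtual_user, range(USERS)))
    all_timings = [t for user in results for t in user]
    if all_timings:
        print(f"requests: {len(all_timings)}, "
              f"mean response time: {mean(all_timings):.3f}s, "
              f"max: {max(all_timings):.3f}s")

In CR-based tools such a script is derived by recording a user session and parameterizing it, whereas an MBT tool such as PLeTsPerf derives the scenario and script from an abstract model of the expected user behavior.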
