Capture-replay vs. programmable web testing: An empirical assessment during test case evolution

Several approaches exist for automated functional web testing, and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE) and (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these approaches both when developing the initial test suites from scratch and when maintaining the test suites upon the release of a new software version. Results indicate that, on the one hand, developing the test suites requires 32% to 112% more time when the programmable approach is adopted, but, on the other hand, test suite maintenance is less expensive with this approach, with a saving of 16% to 51%. We found that, in the majority of cases, after a small number of releases (one to three), the cumulative cost of programmable web testing drops below that of capture-replay web testing, and the saving grows over successive releases.
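The break-even behavior reported above follows from a simple cost model. If developing the capture-replay suite costs D and each new release costs M in maintenance, while the programmable suite costs (1+δ)·D to develop (δ between 0.32 and 1.12) and (1−σ)·M per release to maintain (σ between 0.16 and 0.51), then the programmable suite becomes cheaper after n releases once (1+δ)·D + n·(1−σ)·M ≤ D + n·M, that is, once n ≥ (δ/σ)·(D/M). As a purely illustrative instance, taking δ = 0.4, σ = 0.4, and an initial development cost of twice the per-release maintenance cost (D = 2M) gives break-even at n ≥ 2, within the one-to-three releases observed here.

To make the contrast between the two approaches concrete, the following is a minimal sketch of a programmable test written against the Selenium WebDriver API with JUnit 4. The application URL, element locators, and expected title are hypothetical and are not drawn from the six subject applications; a capture-replay tool such as Selenium IDE would record the same steps as a fixed sequence of commands rather than as code.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginTest {

    @Test
    public void successfulLoginShowsHomePage() {
        // Hypothetical application under test, running locally.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/login");
            // Fill in the login form using hypothetical element ids.
            driver.findElement(By.id("username")).sendKeys("admin");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();
            // Programmable assertion on the post-login page title.
            assertTrue(driver.getTitle().contains("Home"));
        } finally {
            driver.quit();
        }
    }
}
```

Because locators and navigation steps live in ordinary code, they can be factored out (for instance, into page objects) and updated in one place when the application changes, which is one plausible source of the maintenance savings measured here.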
