Automatically Generating Test Templates from Test Names

Existing specification-based testing techniques require specifications that either do not exist or are too difficult to create. As a result, they often fall short of their goal of helping developers test expected behaviors. In this paper we present a novel natural-language-based approach that exploits the descriptive nature of test names to generate test templates. Just as modern IDEs simplify development by providing templates for common constructs such as loops, test templates can save time and lower the cognitive barrier to writing tests. The results of our evaluation show that the approach is feasible: despite the difficulty of the task, when a test name contains a sufficient amount of information, the approach achieves over 80% accuracy in parsing the relevant information from the test name and generating the template.
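To make the idea concrete, the name-to-template pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the camel-case splitting heuristic, the arrange/act/assert template shape, and the function names are all assumptions.

```python
import re


def split_test_name(name):
    """Split a camelCase or snake_case test name into lowercase words.

    A crude heuristic stand-in for the identifier-splitting techniques
    the paper builds on (e.g. TIDIER-style splitters).
    """
    name = name.removeprefix("test").strip("_")
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return [w.lower() for w in words]


def generate_template(test_name):
    """Generate a skeletal JUnit-style test method from a test name.

    The recovered words become a comment describing the expected
    behavior; the body is left as TODOs for the developer to fill in.
    """
    description = " ".join(split_test_name(test_name))
    return (
        "@Test\n"
        f"public void {test_name}() {{\n"
        f"    // Expected behavior: {description}\n"
        "    // TODO: arrange\n"
        "    // TODO: act\n"
        "    // TODO: assert\n"
        "}"
    )


print(generate_template("testWithdrawReducesBalance"))
```

Running the sketch on `testWithdrawReducesBalance` yields an empty test method whose comment reads "Expected behavior: withdraw reduces balance", i.e. the descriptive content of the name is surfaced as a starting point for the test body.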
