Automatic exploratory performance testing using a discriminator neural network

We present a novel exploratory performance testing algorithm that uses supervised learning to optimize test suite generation. The goal of the proposed approach is to generate test suites containing a large number of positive tests, i.e., tests revealing performance defects or other issues of interest in the system under test. The key idea is to use a deep neural network to predict which tests are likely to be positive and to train this network online during the test generation process, designing and executing the test suite simultaneously. The proposed algorithm assumes that the system under test is stateless, that the outcome of the tests is deterministic, and that only integer and floating-point inputs are used. Otherwise, the approach is completely automatic and requires no prior knowledge about the internals of the system under test. It can also be used effectively in a continuous integration setting, where small variations of a system are tested successively. We evaluate the algorithm on two example problems: searching for bottlenecks in a web service and searching for efficient hardware configurations on a single-board computer. In both cases, the presented algorithm performed several times better than a random test generator and significantly better than our previously published algorithm, producing test suites with a large proportion of positive tests.
