Unit testing performance with Stochastic Performance Logic

Unit testing is an attractive quality management tool in the software development process; however, practical obstacles make it difficult to use unit tests for performance testing. We present Stochastic Performance Logic, a formalism for expressing performance requirements, together with interpretations that facilitate performance evaluation in the unit test context. The formalism and the interpretations are implemented in a performance testing framework and evaluated in multiple experiments, demonstrating the ability to identify performance differences in realistic unit test scenarios.
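To illustrate the general approach, consider a performance assertion of the form "the measured method is at most k times slower than a baseline method", decided by a one-sided Welch's t-test on collected execution times. The sketch below is a minimal, self-contained approximation under stated assumptions, not the actual API of the SPL framework: the class and method names (PerformanceAssertSketch, atMostKTimesSlower), the sample counts, and the normal-approximation critical value are all hypothetical, and a real harness would additionally deal with JVM warm-up and other measurement pitfalls.

    // Hypothetical sketch; the actual SPL framework API differs.
    public final class PerformanceAssertSketch {

        static volatile long sink; // defeats dead-code elimination by the JIT

        /** Collects n wall-clock timing samples (nanoseconds) of a task. */
        static double[] sample(Runnable task, int n) {
            double[] times = new double[n];
            for (int i = 0; i < n; i++) {
                long start = System.nanoTime();
                task.run();
                times[i] = System.nanoTime() - start;
            }
            return times;
        }

        static double mean(double[] xs) {
            double s = 0;
            for (double x : xs) s += x;
            return s / xs.length;
        }

        static double variance(double[] xs) {
            double m = mean(xs), s = 0;
            for (double x : xs) s += (x - m) * (x - m);
            return s / (xs.length - 1); // sample variance
        }

        /**
         * Checks the assertion "measured is at most k times slower than baseline"
         * with a one-sided Welch's t-test on mean execution times: the assertion
         * holds unless the data let us reject mean(measured) <= k * mean(baseline).
         */
        static boolean atMostKTimesSlower(double[] measured, double[] baseline, double k) {
            double diff = mean(measured) - k * mean(baseline);
            double se = Math.sqrt(variance(measured) / measured.length
                    + k * k * variance(baseline) / baseline.length);
            double t = diff / se;
            // Normal approximation of the critical value (one-sided alpha = 0.05);
            // a full implementation would use the t-distribution with
            // Welch-Satterthwaite degrees of freedom.
            return t <= 1.645;
        }

        public static void main(String[] args) {
            Runnable fast = () -> { long s = 0; for (int i = 0; i < 1_000; i++) s += i; sink = s; };
            Runnable slow = () -> { long s = 0; for (int i = 0; i < 5_000; i++) s += i; sink = s; };
            if (!atMostKTimesSlower(sample(slow, 1_000), sample(fast, 1_000), 10.0)) {
                throw new AssertionError("method exceeded the allowed slowdown factor");
            }
        }
    }

In a unit test, such a check would run as an ordinary assertion, failing the test whenever the measured method exceeds the allowed slowdown factor relative to the baseline at the chosen significance level.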
