A DSL-Based Framework for Performance Assessment

Performance assessment is an essential verification practice for software quality assurance in both research and industry. Experiment setups for performance assessment tend to be complex: a typical experiment must be run across a variety of hardware configurations, software versions, system settings, and input parameters. Typical approaches to performance assessment are script-based. They do not document all variants explicitly, which makes it hard to analyze and reproduce experiment results correctly, and they tend to be monolithic, which makes it hard to extend experiment setups systematically and to reuse features such as result storage and analysis consistently across experiments. In this paper, we present a generic approach and a DSL-based framework for performance assessment. The DSL helps the user specify and organize the variants of an experiment setup explicitly. The Runtime module of our framework executes the experiments, after which the results are stored together with the corresponding setups in a database. Database queries provide easy access to the results of previous experiments and support the correct analysis of experiment results in the context of their setups. Furthermore, we describe operations for common problems in performance assessment, such as outlier detection. At Oracle, we have successfully instantiated the framework and use it for nightly performance assessment of PGX [6, 12], a toolkit for parallel graph analytics.
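The abstract does not show the DSL's concrete syntax; the indented Python sketch below is only a minimal illustration of the underlying ideas it describes, namely enumerating experiment variants explicitly as the cross product of hardware, software-version, setting, and input dimensions, and cleaning measurements with a standard interquartile-range outlier rule. All names in the sketch (the setup dimensions, run_experiment, store) are hypothetical and not taken from the framework.

    import itertools
    import statistics

    # Hypothetical experiment setup: every variant is spelled out explicitly as
    # the cross product of the dimensions the paper's DSL captures declaratively
    # (hardware, software versions, system settings, input parameters).
    setup = {
        "machine": ["server-a", "server-b"],          # hardware
        "version": ["1.4.0", "1.5.0-snapshot"],       # software versions
        "threads": [8, 16, 32],                       # system settings
        "dataset": ["graph-small", "graph-large"],    # input parameters
    }

    def variants(setup):
        """Enumerate all variant combinations of an experiment setup."""
        keys = list(setup)
        for values in itertools.product(*(setup[k] for k in keys)):
            yield dict(zip(keys, values))

    def drop_outliers(samples, k=1.5):
        """Remove samples outside k * IQR of the quartiles (a common outlier rule)."""
        q1, _, q3 = statistics.quantiles(samples, n=4)
        iqr = q3 - q1
        lo, hi = q1 - k * iqr, q3 + k * iqr
        return [s for s in samples if lo <= s <= hi]

    # Sketch of a run loop: execute each variant, clean the measurements, and
    # store the aggregated result together with its full setup so that later
    # database queries can interpret results in the context of their setup.
    # for variant in variants(setup):
    #     samples = run_experiment(variant)              # hypothetical runner
    #     median = statistics.median(drop_outliers(samples))
    #     store(variant, median)                         # hypothetical DB write

In the framework itself, the variant space, result storage, and analysis operations are expressed declaratively in the DSL rather than in general-purpose code; the sketch merely mirrors the structure the abstract describes.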

[1] John K. Ousterhout et al. Scripting: Higher-Level Programming for the 21st Century, 1998, Computer.

[2] Sungpack Hong et al. PGX.D: a fast distributed graph processing engine, 2015, SC15: International Conference for High Performance Computing, Networking, Storage and Analysis.

[3] Douglas C. Schmidt et al. Guest Editor's Introduction: Model-Driven Engineering, 2006, Computer.

[4] Sebastian Wrede et al. Model-Based Performance Testing for Robotics Software Components, 2018, 2018 Second IEEE International Conference on Robotic Computing (IRC).

[5] Eelco Visser et al. Integrated language definition testing: enabling test-driven language development, 2011, OOPSLA '11.

[6] Kunle Olukotun et al. Green-Marl: a DSL for easy and efficient graph analysis, 2012, ASPLOS XVII.

[7] Eelco Visser et al. The Spoofax language workbench: rules for declarative specification of languages and IDEs, 2010, OOPSLA.

[8] Zhe Wu et al. PGX.ISO: Parallel and Efficient In-Memory Engine for Subgraph Isomorphism, 2014, GRADES.

[9] Cesare Pautasso et al. Towards Holistic Continuous Software Performance Assessment, 2017, ICPE Companion.

[10] Aniruddha S. Gokhale et al. UPSARA: A Model-Driven Approach for Performance Analysis of Cloud-Hosted Applications, 2018, 2018 IEEE/ACM 11th International Conference on Utility and Cloud Computing (UCC).

[11] Sungpack Hong et al. PGQL: a property graph query language, 2016, GRADES '16.

[12] Avelino Francisco Zorzo et al. Canopus: A Domain-Specific Language for Modeling Performance Testing, 2016, 2016 IEEE International Conference on Software Testing, Verification and Validation (ICST).

[13] Stuart Kent et al. Model Driven Engineering, 2002, IFM.

[14] Tor-Morten Grønli et al. Meeting Quality Standards for Mobile Application Development in Businesses: A Framework for Cross-Platform Testing, 2016, 2016 49th Hawaii International Conference on System Sciences (HICSS).

[15] Walter Binder et al. SOABench: performance evaluation of service-oriented middleware made easy, 2010, 2010 ACM/IEEE 32nd International Conference on Software Engineering.

[16] Arie van Deursen et al. Domain-specific languages: an annotated bibliography, 2000, ACM SIGPLAN Notices.

[17] Robert C. Martin. Clean Code: A Handbook of Agile Software Craftsmanship, 2008.

[18] Volker Markl et al. Tractor pulling on data warehouses, 2011, DBTest '11.