DoKnowMe: Towards a Domain Knowledge-driven Methodology for Performance Evaluation

Software engineering considers performance evaluation one of the key aspects of software quality assurance. Unfortunately, standard methodologies for performance evaluation are lacking even within the scope of experimental computer science. Inspired by the concept of "instantiation" in object-oriented programming, we separate the generic performance evaluation logic from the distributed, ad hoc studies that embody it, and develop an abstract evaluation methodology (analogous to a "class") that we name Domain Knowledge-driven Methodology (DoKnowMe). By replacing five predefined domain-specific knowledge artefacts, DoKnowMe can be instantiated into concrete methodologies (analogous to "objects") that guide evaluators in the performance evaluation of different software and even computing systems. We also propose a generic validation framework with four indicators (usefulness, feasibility, effectiveness, and repeatability) and use it to validate DoKnowMe in the Cloud services evaluation domain. Given the positive and promising validation results, we plan to integrate more common evaluation strategies into DoKnowMe and to focus further on the performance evaluation of Cloud autoscaler systems.
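The class/object analogy in the abstract can be pictured with a minimal, hypothetical sketch (the class names, artefact names, and method signatures below are our own illustration, not part of the paper): the generic evaluation logic lives in an abstract base class, and a domain instantiates it by supplying its own knowledge artefacts.

```python
from abc import ABC, abstractmethod


class DoKnowMe(ABC):
    """Abstract evaluation methodology: generic logic is fixed here,
    while domain-specific knowledge artefacts come from subclasses."""

    @abstractmethod
    def knowledge_artefacts(self) -> dict:
        """Return the domain-specific artefacts (e.g. a metrics
        catalogue or factor framework) that specialise the logic."""

    def evaluate(self, system: str) -> str:
        # Generic evaluation logic, parameterised by the artefacts.
        artefacts = self.knowledge_artefacts()
        return f"Evaluating {system} using {sorted(artefacts)}"


class CloudServicesEvaluation(DoKnowMe):
    """Hypothetical instantiation for the Cloud services domain."""

    def knowledge_artefacts(self) -> dict:
        # Placeholder artefacts; a real instantiation would plug in
        # concrete domain knowledge here.
        return {"metrics_catalogue": None, "factor_framework": None}


print(CloudServicesEvaluation().evaluate("an IaaS storage service"))
```

The design choice mirrors the paper's claim: evaluators reuse the generic workflow unchanged and swap only the domain knowledge, just as objects share a class's behaviour while differing in state.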
