About Automatic Benchmarking of IaaS Cloud Service Providers for a World of Container Clusters

Cloud service selection can be a complex and challenging task for a cloud engineer. Most current approaches try to identify the best cloud service provider by evaluating several relevant criteria such as prices, processing, memory, disk and network performance, quality of service, and so on. Nevertheless, the decision-making problem involves so many variables that it is hard to model appropriately. We present an approach that is not about selecting the best cloud service provider but about selecting the most similar resources provided by different cloud service providers. This fits the practical needs of cloud service engineers much better, especially if container clusters are involved. EasyCompare, an automated benchmarking tool suite for comparing cloud service providers, is able to benchmark and compare virtual machine types of different cloud service providers using a Euclidean distance measure. It turned out that only 1% of the theoretically possible machine pairs have to be considered in practice. These relevant machine types can be identified by systematic benchmark runs in less than three hours. We present some expectable but also astonishing evaluation results of EasyCompare applied to two major and representative public cloud service providers: Amazon Web Services and Google Compute Engine.
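The comparison idea amounts to treating each benchmarked virtual machine type as a vector of normalized benchmark scores and ranking cross-provider pairs by Euclidean distance, keeping only the closest pairs. The sketch below illustrates that computation under stated assumptions: the machine-type names, vector components, and scores are hypothetical placeholders and do not reflect EasyCompare's actual benchmark suite or measured data.

```python
import math

# Hypothetical, normalized benchmark vectors per machine type, e.g.
# (processing, memory, disk, network) scores scaled to [0, 1].
benchmark_vectors = {
    "aws.m3.medium":     [0.42, 0.35, 0.30, 0.25],
    "aws.m3.large":      [0.61, 0.55, 0.48, 0.40],
    "gce.n1-standard-1": [0.45, 0.33, 0.28, 0.27],
    "gce.n1-standard-2": [0.63, 0.52, 0.50, 0.43],
}

def euclidean_distance(a, b):
    """Euclidean distance between two equally long benchmark vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cross_provider_pairs(vectors):
    """Rank machine-type pairs of different providers by ascending distance."""
    pairs = []
    for name_a, vec_a in vectors.items():
        for name_b, vec_b in vectors.items():
            # The provider prefix comparison keeps each cross-provider pair once
            # and skips same-provider pairs.
            if name_a.split(".")[0] < name_b.split(".")[0]:
                pairs.append((euclidean_distance(vec_a, vec_b), name_a, name_b))
    return sorted(pairs)

for dist, a, b in cross_provider_pairs(benchmark_vectors):
    print(f"{a:20s} <-> {b:20s}  distance = {dist:.3f}")
```

Sorting by distance makes it straightforward to retain only the few closest cross-provider pairs, which mirrors the paper's observation that only about 1% of the theoretically possible machine pairs are practically relevant.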
