Experimenting with Application-Based Benchmarks on Different Cloud Providers via a Multi-cloud Execution and Modeling Framework

Cloud services are emerging as an innovative IT provisioning model that offers benefits over the traditional approach of provisioning infrastructure. However, multi-tenancy, virtualization, and resource sharing make it difficult to estimate application performance at design or deployment time. Assessing the performance of cloud services and comparing cloud offerings therefore requires cloud benchmarks. The aim of this paper is to present a mechanism and a benchmarking process for measuring the performance of various cloud service delivery models, while describing this information in a machine-understandable format. The suggested framework orchestrates benchmark execution and can support multiple cloud providers. In our work, benchmarking measurement results are demonstrated on three large commercial cloud providers (Amazon EC2, Microsoft Azure and Flexiant) in order to assist cloud users with provisioning decisions. Furthermore, we present approaches for measuring service performance using specialized metrics that rank services according to a weighted combination of cost, performance and workload.
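The abstract describes ranking services by a weighted combination of cost, performance and workload. As a minimal illustrative sketch (not the paper's actual metric; all names, weights, and normalization choices here are assumptions), one could normalize each dimension and combine them linearly:

```python
# Hypothetical sketch of a weighted cost/performance/workload ranking.
# Cost is treated as lower-is-better; performance and workload scores
# as higher-is-better. The normalization and weights are illustrative.

def rank_offerings(offerings, weights):
    """Return offering names sorted best-first by weighted score.

    offerings: dict name -> {'cost', 'performance', 'workload'}
    weights:   dict with the same three keys (assumed to sum to 1)
    """
    def normalize(values, invert=False):
        # Min-max normalize to [0, 1]; invert for lower-is-better metrics.
        lo, hi = min(values.values()), max(values.values())
        span = (hi - lo) or 1.0
        return {k: (hi - v) / span if invert else (v - lo) / span
                for k, v in values.items()}

    cost = normalize({n: o["cost"] for n, o in offerings.items()}, invert=True)
    perf = normalize({n: o["performance"] for n, o in offerings.items()})
    load = normalize({n: o["workload"] for n, o in offerings.items()})

    score = {n: weights["cost"] * cost[n]
                + weights["performance"] * perf[n]
                + weights["workload"] * load[n]
             for n in offerings}
    return sorted(score, key=score.get, reverse=True)

# Toy example with made-up measurements for three providers.
providers = {
    "A": {"cost": 0.10, "performance": 80, "workload": 0.7},
    "B": {"cost": 0.25, "performance": 95, "workload": 0.9},
    "C": {"cost": 0.05, "performance": 60, "workload": 0.5},
}
ranking = rank_offerings(providers,
                         {"cost": 0.4, "performance": 0.4, "workload": 0.2})
```

Shifting the weights toward cost or performance reorders the ranking, which is how such a metric can be tuned to a user's provisioning priorities.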
