CloudBench: Experiment Automation for Cloud Environments

The growth in the adoption of cloud computing is driven by clear and distinct benefits for both cloud customers and cloud providers. However, the increase in the number of cloud providers, as well as in the variety of offerings from each provider, has made it harder for customers to choose. At the same time, the number of options for building a cloud infrastructure, from cloud management platforms to different interconnection and storage technologies, poses a challenge for cloud providers. In this context, cloud experiments are as necessary as they are labor intensive. CloudBench [1] is an open-source framework that automates cloud-scale evaluation and benchmarking by running controlled experiments in which complex applications are deployed automatically. Experiments are described through experiment plans, whose directives have enough descriptive power to keep experiment descriptions brief while still allowing customizable multi-parameter variation. Experiments can be executed on multiple clouds through a single interface, and CloudBench can manage experiments that span multiple regions and long periods of time. Its modular design allows external users to extend it directly with new cloud infrastructure APIs and benchmark applications. A built-in data collection system gathers, aggregates, and stores metrics for cloud management activities (such as VM provisioning and VM image capture) as well as application runtime information. Experiments can be conducted in a highly controllable fashion in order to assess the stability, scalability, and reliability of multiple cloud configurations. We demonstrate CloudBench's main characteristics through the evaluation of an OpenStack installation, including experiments with approximately 1200 simultaneous VMs at an arrival rate of up to 400 VMs/hour.
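To make the experiment-plan idea concrete, the sketch below shows, in Python, how a short set of directives could expand into a controlled multi-parameter sweep against a cloud adapter. The plan syntax, directive names, and the FakeCloud adapter are illustrative assumptions for this sketch, not CloudBench's actual interface.

```python
"""Minimal sketch of an experiment-plan-driven parameter sweep.

NOTE: the plan format, directive names, and FakeCloud adapter below are
hypothetical; they only illustrate how brief directives can drive a
controlled, multi-parameter cloud experiment.
"""
import itertools
import random
import time

# Hypothetical experiment plan: one directive per line.
PLAN = """
cloud      openstack-east            # which cloud adapter to use
vary       vm_count     50 100 200   # swept parameter: VMs per experiment
vary       arrival_rate 100 400      # swept parameter: requested VMs/hour
workload   kmeans-hadoop             # application deployed on the VMs
samples    2                         # repetitions per configuration
"""

def parse_plan(text):
    """Turn directive lines into fixed settings plus swept parameters."""
    settings, sweeps = {}, {}
    for raw in text.strip().splitlines():
        line = raw.split("#")[0].strip()
        if not line:
            continue
        key, *values = line.split()
        if key == "vary":
            sweeps[values[0]] = [int(v) for v in values[1:]]
        else:
            settings[key] = values[0]
    return settings, sweeps

class FakeCloud:
    """Stand-in adapter; a real one would call provider-specific APIs."""
    def provision(self, vm_count, arrival_rate):
        # Simulate per-VM provisioning latency instead of real API calls.
        return [random.uniform(20, 90) for _ in range(vm_count)]

def run(plan_text):
    settings, sweeps = parse_plan(plan_text)
    cloud = FakeCloud()
    names = sorted(sweeps)
    for combo in itertools.product(*(sweeps[n] for n in names)):
        config = dict(zip(names, combo))
        for sample in range(int(settings.get("samples", 1))):
            start = time.time()
            latencies = cloud.provision(config["vm_count"], config["arrival_rate"])
            # Aggregate and report the provisioning metric for this configuration.
            print(f"{settings['cloud']} {settings['workload']} {config} "
                  f"sample={sample} "
                  f"avg_provision_s={sum(latencies) / len(latencies):.1f} "
                  f"wall_s={time.time() - start:.2f}")

if __name__ == "__main__":
    run(PLAN)
```

In a real run, FakeCloud would be replaced by an adapter that talks to a provider API (for example OpenStack), which is where a modular framework's extension points for new cloud APIs and benchmark applications would come into play.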

[1] Alexandru Iosup et al. C-Meter: A Framework for Performance Analysis of Computing Clouds. 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, 2009.

[2] Muli Ben-Yehuda et al. Applications Know Best: Performance-Driven Memory Overcommit with Ginkgo. 2011 IEEE Third International Conference on Cloud Computing Technology and Science, 2011.

[3] Mohamed Abu Rizkaa et al. CloudGauge: A Dynamic Cloud and Virtualization Benchmarking Suite. 2010 19th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises, 2010.

[4] André Brinkmann et al. Non-intrusive virtualization management using libvirt. 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010), 2010.

[5] Babak Falsafi et al. Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware. 2011.

[6] L. Youseff et al. Toward a Unified Ontology of Cloud Computing. 2008 Grid Computing Environments Workshop, 2008.

[7] Xiaowei Yang et al. CloudCmp: Comparing Public Cloud Providers. IMC '10, 2010.

[8] Xiaowei Yang et al. Comparing Public-Cloud Providers. IEEE Internet Computing, 2011.

[9] Mark Lamourine et al. OpenStack. ;login: Usenix Mag., 2014.

[10] Bin Li et al. Fair Benchmarking for Cloud Computing Systems. Journal of Cloud Computing: Advances, Systems and Applications, 2012.

[11] Alexandru Iosup et al. A Performance Analysis of EC2 Cloud Computing Services for Scientific Computing. CloudComp, 2009.

[12] Adam Silberstein et al. Benchmarking Cloud Serving Systems with YCSB. SoCC '10, 2010.

[13] Randy H. Katz et al. Above the Clouds: A Berkeley View of Cloud Computing. 2009.

[14] Alekh Jindal et al. Hadoop++. 2010.