Variations in Performance and Scalability: An Experimental Study in IaaS Clouds Using Multi-Tier Workloads

The increasing popularity of clouds drives researchers to answer a large variety of new and challenging questions. Through extensive experimental measurements, we show variations in the performance and scalability of clouds under two non-trivial scenarios. In the first scenario, we target public Infrastructure as a Service (IaaS) clouds and study the case in which a multi-tier application is migrated from a traditional datacenter to one of three IaaS clouds. To validate the findings of the first scenario, we conduct a similar study with three private clouds built using three mainstream hypervisors. We used the RUBBoS benchmark application and compared its performance and scalability when hosted in Amazon EC2, Open Cirrus, and Emulab. Our results show that the best-performing configuration in one cloud can become the worst-performing configuration in another. Subsequently, we identified several system-level bottlenecks, such as high context-switching and network driver processing overheads, that degraded performance. We experimentally evaluate concrete alternative approaches as practical solutions to these problems. We then built three private clouds using a commercial hypervisor (CVM), Xen, and KVM, respectively, and evaluated their performance characteristics using both the RUBBoS and Cloudstone benchmark applications. The three clouds show significant performance variations; for instance, Xen outperforms CVM by 75 percent on the read-write RUBBoS workload, while CVM outperforms Xen by over 10 percent on the Cloudstone workload. These observations were confirmed at a finer granularity through micro-benchmark experiments that measure component performance directly.
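The abstract identifies high context-switching overhead as one of the system-level bottlenecks confirmed by micro-benchmarks. As a minimal illustrative sketch (not the paper's actual micro-benchmark suite), a process's own voluntary and involuntary context-switch counts can be sampled on Unix-like systems via Python's standard `resource` module; the busy/sleep workload below is a hypothetical stand-in for a real component workload:

```python
import resource
import time

def context_switch_counts():
    """Return (voluntary, involuntary) context-switch counts for this process."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_nvcsw, ru.ru_nivcsw

# Sample the counters before and after a workload that repeatedly yields
# the CPU; each sleep typically incurs one voluntary context switch.
v0, iv0 = context_switch_counts()
for _ in range(100):
    time.sleep(0.001)
v1, iv1 = context_switch_counts()

print(f"voluntary switches: {v1 - v0}, involuntary switches: {iv1 - iv0}")
```

Comparing such deltas for the same workload across hypervisors (e.g., Xen vs. KVM guests) gives a rough, directly measurable proxy for the scheduling overhead the paper discusses.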
