Ginkgo: Automated, Application-Driven Memory Overcommitment for Cloud Computing

Continuous advances in multicore and I/O technologies have caused memory to become a very valuable sharable resource that limits the number of virtual machines (VMs) that can be hosted in a single physical server. While today’s hypervisors implement a wide range of mechanisms to overcommit memory, they lack memory allocation policies and frameworks capable of guaranteeing levels of quality of service to their applications. In this short paper we introduce Ginkgo, a memory overcommit framework that takes an application-aware approach to the problem. Ginkgo dynamically estimates VM memory requirements for applications without user involvement or application changes. Ginkgo regularly monitors application progress and incoming load for each VM, using this data to predict application performance under different VM memory sizes. It automates the distribution of memory across VMs at runtime to satisfy performance and capacity constraints while optimizing towards one of several possible goals, such as maximizing overall system performance, minimizing application quality-of-service violations, minimizing memory consumption, or maximizing profit for the cloud provider. Using this framework to run the DayTrader 2.0 and SPECweb2009 benchmarks, our initial experimental results indicate that overcommit ratios of at least 2x can be achieved while maintaining application performance, independently of additional memory savings enabled by techniques such as page coalescing.
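
To make the allocation step concrete, here is a minimal sketch (not from the paper) of how a controller could redistribute a fixed physical memory budget across VMs using per-VM performance predictions. The VM names, the diminishing-returns performance curves, and the greedy hill-climbing policy are illustrative assumptions, not Ginkgo's actual estimator or optimizer.

```python
# Illustrative sketch only: a toy controller that splits a fixed memory
# budget across VMs using hypothetical per-VM performance models.
# Names, model shapes, and the greedy policy are assumptions for
# illustration, not Ginkgo's actual algorithm.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class VM:
    name: str
    min_mem_mb: int                      # floor below which the VM is never shrunk
    perf_model: Callable[[int], float]   # predicted performance at a given memory size
    sla_perf: float                      # minimum acceptable predicted performance

def allocate(vms: Dict[str, VM], total_mem_mb: int, step_mb: int = 128) -> Dict[str, int]:
    """Greedy hill climbing: start every VM at its floor, then repeatedly give
    the next memory increment to the VM whose predicted performance gains most."""
    alloc = {v.name: v.min_mem_mb for v in vms.values()}
    free = total_mem_mb - sum(alloc.values())
    if free < 0:
        raise ValueError("memory floors exceed the physical budget")

    while free >= step_mb:
        # Marginal predicted gain of one more increment for each VM.
        gains = {
            name: vms[name].perf_model(mem + step_mb) - vms[name].perf_model(mem)
            for name, mem in alloc.items()
        }
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break                        # no VM benefits from more memory
        alloc[best] += step_mb
        free -= step_mb

    # Flag any VM whose predicted performance still misses its target.
    for name, mem in alloc.items():
        if vms[name].perf_model(mem) < vms[name].sla_perf:
            print(f"warning: {name} predicted below SLA at {mem} MB")
    return alloc

if __name__ == "__main__":
    # Hypothetical diminishing-returns performance curves (requests/sec).
    vms = {
        "daytrader": VM("daytrader", 512, lambda m: 900 * (1 - 2 ** (-m / 1024)), 700.0),
        "specweb":   VM("specweb",   512, lambda m: 600 * (1 - 2 ** (-m / 2048)), 400.0),
    }
    print(allocate(vms, total_mem_mb=4096))
```

In this toy version, swapping the objective (total predicted performance, SLA violations, memory footprint, or provider profit) only changes how the marginal gains are scored; the paper's point is that such a policy layer can sit on top of existing hypervisor overcommit mechanisms.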
