Comparative Analysis of Page Cache Provisioning in Virtualized Environments

Efficient management of system memory plays a critical role in provisioning virtual machines, as it determines the achievable level of memory over-commitment and the resulting application performance. File accesses from a virtual machine typically traverse multiple levels of page caches, each of which consumes memory. Several page cache provisioning configurations are possible, each offering a different trade-off between memory utilization and performance. In this work, we study page cache provisioning options with the KVM (Kernel-based Virtual Machine) hypervisor. Our goal is to systematically understand the possible provisioning use cases and compare their cost-benefit trade-offs. Toward this end, we implement and evaluate tmem, an exclusive caching model for file blocks based on transcendent memory. Combining the tmem caching model with the existing page cache provisioning options, we present an empirical analysis of all the resulting configurations. Our evaluation identifies the actual caching needs, overheads, and benefits of each combination, and quantifies their relative merits. We find that tmem-based caching yields up to a 10x increase in disk read throughput, with CPU overheads proportional to the throughput gain.
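The exclusive, hypervisor-side caching sketched in the abstract can be illustrated with a small user-space model. The C program below is a conceptual sketch only: it assumes a single file and a fixed-size, direct-mapped pool indexed by page offset, and the names tmem_put and tmem_get are illustrative assumptions rather than the paper's implementation or the Linux cleancache interface that real tmem backends hook into. The key property it demonstrates is exclusivity: a put offers an evicted clean page to the pool, and a successful get consumes the pool copy, so each block is cached in at most one place at a time.

```c
/*
 * Minimal user-space sketch of an exclusive ("tmem"-style) clean-page cache.
 * The interface and pool layout are illustrative assumptions, not the
 * paper's implementation or the Linux cleancache API.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096
#define POOL_PAGES 1024           /* capacity of the hypervisor-side pool */

struct tmem_page {
    int used;
    unsigned long index;          /* page offset within the (single) file */
    char data[PAGE_SIZE];
};

static struct tmem_page pool[POOL_PAGES];

/* Guest evicts a clean page: offer it to the pool (may overwrite a slot). */
static void tmem_put(unsigned long index, const char *data)
{
    struct tmem_page *slot = &pool[index % POOL_PAGES];

    slot->used = 1;
    slot->index = index;
    memcpy(slot->data, data, PAGE_SIZE);
}

/*
 * Guest misses in its page cache: try the pool. Exclusive semantics mean a
 * successful get also removes the pool copy, so at most one copy exists.
 */
static int tmem_get(unsigned long index, char *data)
{
    struct tmem_page *slot = &pool[index % POOL_PAGES];

    if (!slot->used || slot->index != index)
        return 0;                 /* miss: caller must read from disk */

    memcpy(data, slot->data, PAGE_SIZE);
    slot->used = 0;               /* exclusive: drop the pool copy */
    return 1;
}

int main(void)
{
    char page[PAGE_SIZE] = "block 42 contents";
    char out[PAGE_SIZE];

    tmem_put(42, page);                       /* eviction from guest cache */
    printf("first get:  %s\n", tmem_get(42, out) ? "hit" : "miss");
    printf("second get: %s\n", tmem_get(42, out) ? "hit" : "miss");
    return 0;
}
```

Running the sketch prints a hit for the first get and a miss for the second, showing that the pool never holds a block the guest has already pulled back into its own page cache.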
