A Case for Alternative Nested Paging Models for Virtualized Systems

Address translation often emerges as a critical performance bottleneck for virtualized systems and has recently been the impetus for hardware-assisted paging mechanisms. These mechanisms apply similar translation models for both guest and host address translations. We make an important observation: the model employed to translate from guest physical addresses (GPAs) to host physical addresses (HPAs) is orthogonal to the model used to translate guest virtual addresses (GVAs) to GPAs. Changing the GPA→HPA model requires VMM cooperation, but has no implications for guest OS compatibility. As an example, we consider a hashed page table approach for GPA→HPA translation. Nested paging, widely considered the most promising approach, uses multi-level forward-mapped page tables for both GVA→GPA and GPA→HPA translations, resulting in a potential O(n²) page walk cost on a TLB miss for n-level page tables. In contrast, the hashed page table approach results in an expected O(n) cost. Our simulation results show that when a hashed page table is used at the nested level, memory-system performance is no worse, and sometimes better, than with a nested forward-mapped page table, owing to shorter page walks and reduced cache pressure. This showcases the potential for alternative nested paging models.
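To make the asymptotic claim concrete, the sketch below (not from the paper) counts the memory references needed to resolve one guest TLB miss under the two nested-translation models. The cost of one expected probe per hashed GPA→HPA lookup is an idealized assumption; real behavior depends on hash collisions and the workload.

```c
/* Sketch: per-TLB-miss memory-reference counts for two nested paging models.
 * Assumes an n-level guest page table and, for the hashed case, an idealized
 * hashed nested table resolved in one bucket probe per GPA->HPA lookup. */
#include <stdio.h>

/* 2D forward-mapped nested walk: each of the n guest page-table levels is
 * addressed by a GPA, which needs a full n-level host walk before the guest
 * entry can be read; the final guest data GPA needs one more host walk.
 * Total: n*(n+1) + n = n^2 + 2n references, i.e. O(n^2). */
static unsigned nested_forward_refs(unsigned n) {
    return n * (n + 1) + n;
}

/* Guest forward walk over a hashed nested table: each guest-level GPA is
 * translated with one (expected) hashed probe, then the entry itself is
 * read; the final data GPA costs one more probe.
 * Total: 2n + 1 references, i.e. expected O(n). */
static unsigned nested_hashed_refs(unsigned n) {
    return 2 * n + 1;
}

int main(void) {
    for (unsigned n = 2; n <= 5; n++)
        printf("n=%u levels: forward-mapped nested walk = %2u refs, "
               "hashed nested walk ~ %2u refs\n",
               n, nested_forward_refs(n), nested_hashed_refs(n));
    /* n=4 reproduces the familiar 24-reference worst case for 4-level
     * x86-64 nested paging. */
    return 0;
}
```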