Lifetime-aware LRU promotion policy for last-level cache

The traditional LRU replacement policy is vulnerable to memory-intensive workloads with large amounts of non-reused data, such as thrashing and scanning applications. For such workloads, most cache blocks receive no hits while they reside in the cache. Cache performance can be improved by reducing the interference from this non-reused data: the lifetime of the remaining blocks is extended, allowing them to contribute additional hits. We propose a Lifetime-aware LRU Promotion Policy and show that changing the promotion policy alone can effectively reduce misses in the last-level cache. Our policy dynamically adjusts the promotion strategy and increases the lifetime of useful cache blocks. Experimental results show that our proposal reduces average MPKI by 6% and 9% relative to EAF and DIP, respectively. In multicore configurations, it also improves performance and reduces MPKI.
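
The abstract does not spell out the internals of the promotion policy, so the following is only a minimal illustrative sketch of the general idea: on a hit, a block is promoted by a bounded number of recency positions rather than jumping straight to MRU, so a block must hit repeatedly to earn the most-protected position, while misses insert near the LRU end so scan data leaves quickly. All names here (CacheSet, promo_step, the trace in main) are hypothetical, and the dynamic adjustment of the promotion step (e.g., via set dueling as in DIP) is an assumption, not the paper's mechanism.

```cpp
// Illustrative sketch only; not the paper's actual implementation.
// build: g++ -std=c++17 promo_sketch.cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// One set of a set-associative cache, ordered MRU (front) -> LRU (back).
class CacheSet {
public:
    CacheSet(std::size_t ways, std::size_t promo_step)
        : ways_(ways), promo_step_(promo_step) {}

    // Returns true on a hit, false on a miss.
    bool access(std::uint64_t tag) {
        for (std::size_t i = 0; i < stack_.size(); ++i) {
            if (stack_[i] == tag) {
                // Hit: promote by at most promo_step_ positions instead of
                // moving directly to MRU, so only repeatedly reused blocks
                // reach the most-protected position.
                std::size_t target = (i > promo_step_) ? i - promo_step_ : 0;
                stack_.erase(stack_.begin() + static_cast<std::ptrdiff_t>(i));
                stack_.insert(stack_.begin() + static_cast<std::ptrdiff_t>(target), tag);
                return true;
            }
        }
        // Miss: evict the LRU block if the set is full, then insert the new
        // block at the LRU end so non-reused (scan/streaming) blocks are
        // evicted quickly and do not shorten the lifetime of useful blocks.
        if (stack_.size() == ways_) stack_.pop_back();
        stack_.push_back(tag);
        return false;
    }

private:
    std::size_t ways_;
    std::size_t promo_step_;  // assumed tunable at runtime, e.g. via set dueling
    std::vector<std::uint64_t> stack_;
};

int main() {
    CacheSet set(4, /*promo_step=*/1);
    // A reused block (tag 1) interleaved with a streaming scan (tags 100+):
    std::uint64_t trace[] = {1, 100, 1, 101, 1, 102, 1, 103, 1};
    int hits = 0;
    for (std::uint64_t t : trace) hits += set.access(t);
    std::cout << "hits: " << hits << "\n";  // block 1 survives the scan
}
```

Under this sketch, the scan blocks enter at the LRU end and are evicted without displacing the reused block, which is the kind of interference reduction the abstract describes.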

[1] Using Aggressor Thread Information to Improve Shared Cache Management for CMPs, 18th International Conference on Parallel Architectures and Compilation Techniques (PACT), 2009.

[2] David R. Kaeli, et al. Multi2Sim: A simulation framework for CPU-GPU computing, 21st International Conference on Parallel Architectures and Compilation Techniques (PACT), 2012.

[3] Zhe Wang, et al. Decoupled dynamic cache segmentation, IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2012.

[4] Babak Falsafi, et al. Dead-block prediction & dead-block correlating prefetchers, ISCA, 2001.

[5] Chao Wang, et al. Cache Promotion Policy Using Re-reference Interval Prediction, IEEE International Conference on Cluster Computing, 2012.

[6] J. Spencer Love, et al. Caching strategies to improve disk system performance, Computer, 1994.

[7] Gabriel H. Loh, et al. PIPP: Promotion/insertion pseudo-partitioning of multi-core shared caches, ISCA, 2009.

[8] Aamer Jaleel, et al. Achieving Non-Inclusive Cache Performance with Inclusive Caches: Temporal Locality Aware (TLA) Cache Management Policies, 43rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2010.

[9] Margaret Martonosi, et al. Timekeeping in the memory system: predicting and optimizing memory behavior, ISCA, 2002.

[10] Aamer Jaleel, et al. Adaptive insertion policies for managing shared caches, International Conference on Parallel Architectures and Compilation Techniques (PACT), 2008.

[11] Onur Mutlu, et al. The evicted-address filter: A unified mechanism to address both cache pollution and thrashing, 21st International Conference on Parallel Architectures and Compilation Techniques (PACT), 2012.

[12] Aamer Jaleel, et al. Adaptive insertion policies for high performance caching, ISCA, 2007.

[13] Mainak Chaudhuri, et al. Bypass and insertion algorithms for exclusive last-level caches, 38th Annual International Symposium on Computer Architecture (ISCA), 2011.

[14] Yan Solihin, et al. Counter-Based Cache Replacement and Bypassing Algorithms, IEEE Transactions on Computers, 2008.