A cache partitioning mechanism to protect shared data for CMPs

The last-level cache (LLC) of a modern chip multiprocessor (CMP) holds two kinds of data: shared data accessed by multiple cores and private data accessed by only one core. Although the former are likely to have a larger performance impact than the latter, the LLC manages both kinds of data in the same fashion. To improve execution efficiency on a CMP, this paper proposes a cache partitioning mechanism that protects shared data from excessive eviction. The evaluation results show that the proposed mechanism improves performance by up to 76% and by 8% on average, at a hardware cost of less than 2% of the LLC.
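As a rough illustration of the idea only, the sketch below models a single LLC set whose victim selection prefers evicting private blocks as long as the shared blocks fit within a per-set quota of protected ways. The class name, the quota parameter, and the LRU baseline are assumptions introduced for this sketch; the paper's actual partitioning mechanism may differ.

```python
# Minimal sketch of a sharing-aware victim-selection policy (illustrative only).
# Assumptions beyond the abstract: a set-associative LLC, per-block
# shared/private tags, and a per-set quota of ways protected for shared blocks.

from collections import OrderedDict

class CacheSet:
    def __init__(self, num_ways, shared_quota):
        self.num_ways = num_ways          # associativity of the set
        self.shared_quota = shared_quota  # ways protected for shared blocks
        self.blocks = OrderedDict()       # tag -> is_shared, ordered LRU -> MRU

    def access(self, tag, is_shared):
        """Look up a tag; on a miss, choose a victim and insert the block."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag)  # hit: promote to MRU
            self.blocks[tag] = is_shared
            return True
        if len(self.blocks) >= self.num_ways:
            self._evict()
        self.blocks[tag] = is_shared      # insert the new block at MRU
        return False

    def _evict(self):
        shared_count = sum(1 for s in self.blocks.values() if s)
        # Protect shared data: while shared blocks fit within their quota,
        # victimize the LRU private block instead of the global LRU block.
        if shared_count <= self.shared_quota:
            for tag, is_shared in self.blocks.items():  # LRU -> MRU order
                if not is_shared:
                    del self.blocks[tag]
                    return
        # Otherwise (or if every block is shared), fall back to plain LRU.
        self.blocks.popitem(last=False)

# Example: a 4-way set with 2 ways protected for shared data.
s = CacheSet(num_ways=4, shared_quota=2)
s.access(0x1, is_shared=True)
s.access(0x2, is_shared=False)
```

With this kind of policy, a burst of private-data misses evicts other private blocks first, so shared blocks within the quota stay resident; only when shared data itself overflows its quota does eviction revert to ordinary LRU.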
