Student research poster: A low complexity cache sharing mechanism to address system fairness
Shared caches have become the de facto design choice in current multi-cores, from embedded devices to high-performance processors. In these systems, requests from multiple applications compete for cache resources, degrading each application's progress to a different extent, where progress is quantified as an application's performance under sharing relative to its performance in isolated execution. The disparity between the progress of the co-running applications makes system behavior unpredictable and creates a fairness problem. This problem can be addressed by carefully partitioning cache resources among the contending applications, but to be effective, a partitioning approach needs to estimate per-application progress.
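As a concrete illustration of these definitions (not part of the poster itself), the following is a minimal sketch assuming progress is computed as shared-mode performance divided by isolated-mode performance, and fairness as the ratio of the minimum to the maximum progress. The IPC figures, the `progress` helper, and the min/max fairness metric are illustrative assumptions, not the mechanism proposed in the poster.

```python
# Sketch of the quantities discussed above (assumed definitions):
#   progress_i = perf_shared_i / perf_alone_i   (1.0 means no slowdown)
#   fairness   = min(progress) / max(progress)  (1.0 means perfectly fair)

def progress(perf_shared: float, perf_alone: float) -> float:
    """Progress of one application: performance when sharing the cache
    relative to its performance in isolated execution (e.g., an IPC ratio)."""
    return perf_shared / perf_alone

def fairness(progresses: list[float]) -> float:
    """A simple min/max fairness metric over per-application progresses."""
    return min(progresses) / max(progresses)

# Hypothetical IPC figures for two co-running applications.
apps = {
    "A": {"ipc_shared": 1.2, "ipc_alone": 1.5},  # progress = 0.80
    "B": {"ipc_shared": 0.6, "ipc_alone": 1.5},  # progress = 0.40
}

progresses = [progress(v["ipc_shared"], v["ipc_alone"]) for v in apps.values()]
print(progresses)            # [0.8, 0.4]
print(fairness(progresses))  # 0.5 -> application B is hurt far more by sharing
```

Under these assumptions, a cache-partitioning mechanism would aim to reallocate ways or sets so that the per-application progress values, and hence the fairness ratio, move closer together.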