Design and Evaluation of a Distributed Cache Architecture with Prediction

We propose a secondary cache architecture that combines a predictive fetch strategy with a distributed cache to build a high-performance memory system. The cache is partitioned into small units that are distributed evenly across the main memory space, providing high bandwidth between the cache and the DRAM. A hardware prediction scheme prefetches data into the cache to hide the long DRAM latency; it does not rely on any predetermined data access patterns and is completely transparent to the user. Simulating the architecture on a set of benchmark programs showed a 40%-90% improvement in effective memory access time compared to traditional caching.
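The abstract does not specify the prediction mechanism, but the idea of learning access patterns at run time and prefetching to hide miss latency can be sketched with a simple model. The snippet below is an illustrative sketch only, not the paper's scheme: a direct-mapped cache paired with a Markov-style last-successor predictor, where the prefetch is assumed to complete for free by the next access. All class and parameter names are hypothetical.

```python
class PredictiveCache:
    """Direct-mapped cache with a last-successor (Markov-style) prefetcher.

    Illustrative model: on each access, the predictor records the address
    that followed the previous one; on a later visit, the recorded successor
    is prefetched into the cache ahead of time.
    """

    def __init__(self, num_lines=64, hit_time=1, miss_penalty=20):
        self.num_lines = num_lines
        self.hit_time = hit_time          # cycles for a cache hit
        self.miss_penalty = miss_penalty  # cycles for a DRAM access
        self.lines = [None] * num_lines   # address held by each cache line
        self.predictor = {}               # addr -> last observed successor
        self.prev_addr = None
        self.total_cycles = 0
        self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        idx = addr % self.num_lines
        if self.lines[idx] == addr:
            self.total_cycles += self.hit_time
        else:
            self.total_cycles += self.miss_penalty
            self.lines[idx] = addr
        # Learn the observed successor of the previous address.
        if self.prev_addr is not None:
            self.predictor[self.prev_addr] = addr
        # Prefetch the predicted next address (modeled as free/overlapped).
        nxt = self.predictor.get(addr)
        if nxt is not None:
            self.lines[nxt % self.num_lines] = nxt
        self.prev_addr = addr

    def effective_access_time(self):
        return self.total_cycles / self.accesses
```

For example, sweeping a 128-address trace through a 64-line cache twice: the first pass misses everywhere while the predictor learns the successor chain, and the second pass hits on nearly every access because each miss triggers a prefetch of the next block, illustrating how run-time prediction can cut effective access time without any predetermined pattern.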