Hybrid Multi-level Cache Management Policy

Recently, multi-level caches have become more popular because they outperform single-level caches. They are particularly useful in distributed and parallel systems where a number of applications run at the same time, since the larger aggregate cache size makes the data required by an application program more likely to be found in the cache. Many multi-level cache management policies, such as LRU-K [15], PROMOTE [1], and DEMOTE [5], have been developed, but performance issues remain. The main difficulty in these policies is selecting a victim. In this paper, a new policy is suggested that uses compressed caching and selects a victim based on three factors: first, how many times the cache block has been promoted or demoted; second, the size of the cache block to be replaced; and third, the recency of the block in the cache memory [1]. This policy is expected to exhibit a better hit ratio than previously existing multi-level cache management policies.
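
To make the victim-selection rule concrete, the sketch below scores each cached block on the three factors named above. This is only an illustrative interpretation: the CacheBlock structure, the weights, and the victim_score formula are assumptions made for this sketch and are not taken from the paper.

# Illustrative sketch of the victim-selection idea described above.
# The fields, weights, and scoring formula are assumptions, not the
# paper's definition of the policy.

from dataclasses import dataclass, field
import time

@dataclass
class CacheBlock:
    key: str
    size: int                  # size of the (compressed) block, in bytes
    move_count: int = 0        # how many times the block was promoted/demoted
    last_access: float = field(default_factory=time.time)  # recency timestamp

def victim_score(block: CacheBlock, now: float,
                 w_moves: float = 1.0, w_size: float = 1.0,
                 w_age: float = 1.0) -> float:
    """Higher score means a better eviction candidate (assumed weighting)."""
    age = now - block.last_access  # older blocks score higher
    return w_moves * block.move_count + w_size * block.size + w_age * age

def select_victim(blocks: list[CacheBlock]) -> CacheBlock:
    """Return the block with the highest combined score as the victim."""
    now = time.time()
    return max(blocks, key=lambda b: victim_score(b, now))

In a full policy the weights would need tuning (and block size would likely be normalised against the cache capacity), but the sketch shows how the three factors can be combined into a single eviction decision.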

[1] Jim Zelenka, et al. Informed prefetching and caching, 1995, SOSP.

[2] J. Spencer Love, et al. Caching strategies to improve disk system performance, 1994, Computer.

[3] Song Jiang, et al. LIRS: an efficient low inter-reference recency set replacement policy to improve buffer cache performance, 2002, SIGMETRICS '02.

[4] Zhan-sheng Li, et al. CRFP: A Novel Adaptive Replacement Policy Combined the LRU and LFU Policies, 2008, 2008 IEEE 8th International Conference on Computer and Information Technology Workshops.

[5] Chentao Wu, et al. Hint-K: An Efficient Multilevel Cache Using K-Step Hints, 2014, IEEE Trans. Parallel Distributed Syst.

[6] J. T. Robinson, et al. Data cache management using frequency-based replacement, 1990, SIGMETRICS '90.

[7] Sang Lyul Min, et al. LRFU: A Spectrum of Policies that Subsumes the Least Recently Used and Least Frequently Used Policies, 2001, IEEE Trans. Computers.

[8] Yuanyuan Zhou, et al. Eviction-based Cache Placement for Storage Caches, 2003, USENIX Annual Technical Conference, General Track.

[9] Yannis Smaragdakis, et al. EELRU: simple and effective adaptive page replacement, 1999, SIGMETRICS '99.

[10] Urmila Shrawankar, et al. Block pattern based buffer cache management, 2013, 2013 8th International Conference on Computer Science & Education.

[11] Yan Solihin, et al. Counter-Based Cache Replacement and Bypassing Algorithms, 2008, IEEE Transactions on Computers.

[12] Gerhard Weikum, et al. The LRU-K page replacement algorithm for database disk buffering, 1993, SIGMOD Conference.

[13] Butler W. Lampson, et al. Hints for Computer System Design, 1983, IEEE Software.

[14] Jongman Kim, et al. ECM: Effective Capacity Maximizer for high-performance compressed caching, 2013, 2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA).

[15] Pei Cao, et al. Adaptive page replacement based on memory reference behavior, 1997, SIGMETRICS '97.

[16] Chentao Wu, et al. Hint-K: An Efficient Multilevel Cache Using K-Step Hints, 2010, IEEE Transactions on Parallel and Distributed Systems.

[17] Christian Engelmann, et al. A unified multiple-level cache for high performance storage systems, 2007, Int. J. High Perform. Comput. Netw.

[18] Binny S. Gill. On Multi-level Exclusive Caching: Offline Optimality and Why Promotions Are Better Than Demotions, 2008, FAST.

[19] John Wilkes, et al. My Cache or Yours? Making Storage More Exclusive, 2002, USENIX Annual Technical Conference, General Track.

[20] Nimrod Megiddo, et al. Outperforming LRU with an adaptive replacement cache algorithm, 2004, Computer.

[21] Stephen L. Scott, et al. A unified multiple-level cache for high performance storage systems, 2005, 13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems.

[22] Urmila Shrawankar, et al. Managing Buffer Cache by Block Access Pattern, 2012.

[23] Kai Li, et al. MC2: Multiple Clients on a Multilevel Cache, 2008, 2008 The 28th International Conference on Distributed Computing Systems.