TaP: Table-based Prefetching for Storage Caches

TaP is a sequential prefetching and caching technique for storage caches that improves the read-ahead cache hit rate and system response time. A unique feature of TaP is the use of a table to detect sequential access patterns in the I/O workload and to dynamically determine the optimal prefetch cache size. Compared to several popular prefetching techniques, TaP achieves a better hit rate and response time while using a read cache that is often an order of magnitude smaller. TaP is especially efficient when the I/O workload consists of interleaved requests from multiple applications, only some of which access their data sequentially. For example, when the interleaved workload consists of 10% sequential and 90% random application data, TaP matches the hit rate of the other techniques with a cache that is 100 times smaller.

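To make the table-based detection idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a table that tracks the expected continuation address of recent read requests. A request whose start address matches a tracked continuation is flagged as part of a sequential stream, which is the point at which a prefetch would be issued. The class name, table size, and LRU-style eviction are assumptions for illustration; TaP's use of table statistics to size the prefetch cache is omitted.

```python
from collections import OrderedDict

class SequentialDetector:
    """Toy table-based sequential-stream detector in the spirit of TaP.

    The table maps an expected "next LBA" to a hit count. When a read
    request arrives whose start address matches a table entry, the stream
    is treated as sequential (and a prefetch would be issued); otherwise
    the request's own continuation address is inserted so that a future
    contiguous request can be recognized. The table is bounded and evicted
    LRU-style. Illustrative only, not the published TaP algorithm.
    """

    def __init__(self, max_entries=1024):
        self.table = OrderedDict()   # expected next LBA -> hit count
        self.max_entries = max_entries

    def on_read(self, start_lba, length):
        """Return True if this request continues a tracked stream."""
        sequential = start_lba in self.table
        if sequential:
            # Contiguous with an earlier request: promote the stream and
            # move the expectation forward to the block after this request.
            hits = self.table.pop(start_lba) + 1
            self.table[start_lba + length] = hits
        else:
            # New (possibly random) request: remember where a contiguous
            # follow-up would start.
            self.table[start_lba + length] = 0
            if len(self.table) > self.max_entries:
                self.table.popitem(last=False)   # evict the oldest entry
        return sequential


if __name__ == "__main__":
    det = SequentialDetector(max_entries=8)
    # Interleaved workload: one sequential stream plus random reads.
    for lba in (100, 5000, 108, 9000, 116, 124):
        print(lba, "sequential" if det.on_read(lba, 8) else "random")
```

In this toy run, the requests at LBAs 100, 108, 116, 124 are recognized as a sequential stream after the first miss, while the interleaved random reads at 5000 and 9000 leave only short-lived table entries, which is the behavior that lets a small table serve a workload dominated by random traffic.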