SILK+: Preventing Latency Spikes in Log-Structured Merge Key-Value Stores Running Heterogeneous Workloads

Log-Structured Merge Key-Value stores (LSM KVs) are designed to offer good write performance by capturing client writes in memory and only later flushing them to storage. These writes are subsequently compacted into a tree-like on-disk data structure to improve read performance and reduce storage space. It has been widely documented that compactions severely hamper throughput, and various optimizations have successfully dealt with this problem. These techniques include, among others, rate-limiting flushes and compactions, selecting among compactions for maximum effect, and limiting compactions to the highest level in so-called fragmented LSMs. In this article, we focus on latencies rather than throughput. We first document that LSM KVs exhibit high tail latencies. The techniques proposed for optimizing throughput do not address this issue and, in some cases, exacerbate it. The root cause of these high tail latencies is interference between client writes, flushes, and compactions. Another major cause is the heterogeneous nature of the workloads in terms of operation mix and item sizes, whereby a small number of computationally heavy requests slows down the vast majority of smaller requests. We introduce the notion of an Input/Output (I/O) bandwidth scheduler for an LSM-based KV store that reduces the tail latency caused by interference from flushes and compactions and by workload heterogeneity. We explore three techniques as part of this I/O scheduler: (1) opportunistically allocating more bandwidth to internal operations during periods of low load, (2) prioritizing flushes and compactions at the lower levels of the tree, and (3) separating client requests by size and by data access path. SILK+ is a new open-source LSM KV that incorporates this notion of an I/O scheduler.
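To make the scheduling policy concrete, the following C++ sketch illustrates how techniques (1) and (2) might be expressed. It is not the SILK+ implementation: the class and field names, the bandwidth figures, and the strict-priority allocation rule are all our own assumptions for illustration. Internal operations opportunistically absorb whatever bandwidth client traffic leaves free, subject to a floor that keeps flushes from stalling, and within that budget flushes outrank low-level (L0 -> L1) compactions, which in turn outrank higher-level compactions.

// Minimal sketch of an I/O bandwidth allocator in the spirit of SILK+.
// All names and numbers are hypothetical, not taken from the paper's code.
#include <algorithm>
#include <cstdint>
#include <cstdio>

struct BandwidthPlan {
    int64_t flush_bw;      // bytes/s reserved for memtable flushes
    int64_t low_level_bw;  // bytes/s for L0 -> L1 compactions
    int64_t high_level_bw; // bytes/s left for higher-level compactions
};

class IoScheduler {
public:
    IoScheduler(int64_t total_bw, int64_t min_internal_bw)
        : total_bw_(total_bw), min_internal_bw_(min_internal_bw) {}

    BandwidthPlan Allocate(int64_t client_bw,
                           int64_t flush_demand,
                           int64_t low_level_demand) const {
        // Technique (1): internal work gets whatever bandwidth the
        // measured client load leaves free, but never less than a floor.
        int64_t internal = std::max(total_bw_ - client_bw, min_internal_bw_);
        BandwidthPlan plan{};
        // Technique (2): strict priority. Flushes are satisfied first,
        // then L0 -> L1 compactions; higher-level compactions receive
        // only the remainder and are therefore the first to starve.
        plan.flush_bw = std::min(flush_demand, internal);
        internal -= plan.flush_bw;
        plan.low_level_bw = std::min(low_level_demand, internal);
        internal -= plan.low_level_bw;
        plan.high_level_bw = internal;
        return plan;
    }

private:
    int64_t total_bw_;        // total disk bandwidth, bytes/s
    int64_t min_internal_bw_; // floor for internal operations, bytes/s
};

int main() {
    // Hypothetical device: 200 MB/s total, 30 MB/s floor for internal ops.
    IoScheduler sched(200LL << 20, 30LL << 20);

    // Low client load: internal operations run close to full speed.
    BandwidthPlan idle = sched.Allocate(/*client_bw=*/20LL << 20,
                                        /*flush_demand=*/50LL << 20,
                                        /*low_level_demand=*/40LL << 20);
    // High client load: internal work shrinks to the floor, and the
    // priority order decides who keeps bandwidth (flushes do).
    BandwidthPlan busy = sched.Allocate(/*client_bw=*/190LL << 20,
                                        /*flush_demand=*/50LL << 20,
                                        /*low_level_demand=*/40LL << 20);

    std::printf("idle: flush=%lld low=%lld high=%lld (MB/s)\n",
                (long long)(idle.flush_bw >> 20),
                (long long)(idle.low_level_bw >> 20),
                (long long)(idle.high_level_bw >> 20));
    std::printf("busy: flush=%lld low=%lld high=%lld (MB/s)\n",
                (long long)(busy.flush_bw >> 20),
                (long long)(busy.low_level_bw >> 20),
                (long long)(busy.high_level_bw >> 20));
    return 0;
}

Technique (3), separating client requests by size and by data access path, would sit in front of such an allocator, routing small and large requests to different queues; it is omitted from this sketch for brevity.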
