Consistency-based service level agreements for cloud storage

Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions. Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.
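To make the SLA mechanism concrete, the following minimal sketch in Python shows how an application might encode such an SLA as a ranked list of subSLAs, each pairing a consistency guarantee with a latency target and a utility, and how a client library could pick the replica with the highest expected utility. All names here (SubSla, Replica, choose_replica) and the probability figures are hypothetical illustrations under stated assumptions, not the Pileus API; the paper's full menu of guarantees (e.g., bounded staleness, monotonic reads, causal) is omitted, and because every subSLA in this example shares one 300 ms target, a single per-replica latency probability suffices.

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

# Consistency guarantees named in the abstract, strongest to weakest.
class Consistency(Enum):
    STRONG = 3
    READ_MY_WRITES = 2
    EVENTUAL = 1

@dataclass(frozen=True)
class SubSla:
    consistency: Consistency  # guarantee a read must satisfy
    latency_ms: int           # latency target for the read
    utility: float            # value to the application if this subSLA is met

# An SLA is a ranked list of subSLAs: the application prefers earlier,
# higher-utility entries but accepts later ones when conditions degrade.
SHOPPING_CART_SLA: List[SubSla] = [
    SubSla(Consistency.STRONG,         300, 1.00),
    SubSla(Consistency.READ_MY_WRITES, 300, 0.75),
    SubSla(Consistency.EVENTUAL,       300, 0.50),
]

@dataclass
class Replica:
    name: str
    is_primary: bool           # assume only the primary always serves strong reads
    has_session_writes: bool   # replica has applied this session's writes
    p_within_300ms: float      # measured chance a read returns within 300 ms

    def can_serve(self, c: Consistency) -> bool:
        if c is Consistency.STRONG:
            return self.is_primary
        if c is Consistency.READ_MY_WRITES:
            return self.is_primary or self.has_session_writes
        return True  # any replica can serve an eventual read

def choose_replica(sla: List[SubSla],
                   replicas: List[Replica]) -> Optional[Tuple[Replica, SubSla]]:
    """Pick the (replica, subSLA) pair with the highest expected utility:
    the subSLA's utility weighted by the chance of meeting its latency target."""
    best, best_eu = None, float("-inf")
    for sub in sla:
        for r in replicas:
            if not r.can_serve(sub.consistency):
                continue
            eu = sub.utility * r.p_within_300ms
            if eu > best_eu:
                best, best_eu = (r, sub), eu
    return best

if __name__ == "__main__":
    replicas = [
        Replica("us-primary", True,  True,  0.30),  # distant: often misses 300 ms
        Replica("eu-replica", False, True,  0.95),  # nearby, has session's writes
        Replica("ap-replica", False, False, 0.99),  # nearby but lagging
    ]
    pick = choose_replica(SHOPPING_CART_SLA, replicas)
    if pick:
        r, sub = pick
        print(f"read from {r.name} at {sub.consistency.name} "
              f"(expected utility {sub.utility * r.p_within_300ms:.2f})")

With these illustrative numbers, a strong read from the distant primary yields expected utility 0.30, while a read-my-writes read from the nearby replica yields 0.75 x 0.95 = 0.71, so the sketch reads from eu-replica at the intermediate guarantee, mirroring how the system adapts server selection to client-server latencies rather than committing to one static consistency choice.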
