Database system benchmark frameworks like the Yahoo! Cloud Serving Benchmark (YCSB) play a crucial role in developing and tuning data management systems. YCSB has become the de facto standard for the performance evaluation of scalable NoSQL database systems. However, its original design is prone to omitting important latency measurements. This phenomenon is known as the coordinated omission problem and occurs in almost all load generators and monitoring tools. A recent revision of the YCSB code base addresses this particular problem, but does not fully solve it. In this paper, we present the latency measurement scheme of NoSQLMark, our own YCSB-based scalable benchmark framework that completely avoids coordinated omission, and we show that NoSQLMark produces more accurate results using our validation tool SickStore and the distributed data store Cassandra.
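To illustrate the effect the abstract describes, the following is a minimal, hypothetical sketch (not NoSQLMark's actual implementation) of how a closed-loop load generator under-reports latency. A naive generator times each request from the moment it is actually sent, so stalls caused by a slow response silently delay subsequent requests without being recorded; a corrected scheme measures latency against the *intended* send time derived from the target request rate. All names here (`run_workload`, the example service times) are illustrative assumptions:

```python
def run_workload(service_times, interval):
    """Replay a list of service times at a target inter-request interval.

    service_times[i] -- how long request i takes at the server
    interval         -- intended gap between consecutive request starts
    Returns (naive, corrected) latency lists in the same time units.
    """
    naive, corrected = [], []
    clock = 0.0  # simulated wall clock
    for i, svc in enumerate(service_times):
        intended = i * interval        # when the request *should* start
        start = max(clock, intended)   # closed loop: wait for previous reply
        finish = start + svc
        naive.append(finish - start)         # what a naive generator records
        corrected.append(finish - intended)  # includes time spent queued
        clock = finish
    return naive, corrected

# One slow request (10 time units) at a target interval of 1 time unit:
naive, corrected = run_workload([1, 10, 1, 1, 1], 1)
# naive     -> [1, 10, 1, 1, 1]   only the slow request looks slow
# corrected -> [1, 10, 10, 10, 10]  queued requests inherit the stall
```

The naive measurements make the stall look like a single outlier, while the corrected view shows that every request scheduled during the stall experienced high latency; this gap is exactly the coordinated omission problem.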