Linearizability is the gold standard among algorithm designers for deducing the correctness of a distributed algorithm that uses implemented shared objects from the correctness of the corresponding algorithm that uses atomic versions of the same objects. We show that linearizability does not suffice for this purpose when processes can exploit randomization, and we discuss the existence of alternative correctness conditions. This paper makes the following contributions:

1. Various examples demonstrate that using well-known linearizable implementations of objects (e.g., snapshots) in place of atomic objects can change the probability distribution of the outcomes that the adversary is able to generate. In some cases, an oblivious adversary can create a probability distribution of outcomes for an algorithm with implemented, linearizable objects that not even a strong adversary can generate for the same algorithm with atomic objects.

2. A new correctness condition for shared object implementations, called strong linearizability, is defined. We prove that a strong adversary (i.e., one that sees the outcome of each coin flip immediately) gains no additional power when atomic objects are replaced by strongly linearizable implementations. In general, no strictly weaker correctness condition suffices to ensure this. We also show that strong linearizability is a local and composable property.

3. In contrast to the situation for the strong adversary, for a natural weaker adversary (one that cannot see a process' coin flip until its next operation on a shared object) we prove that there is no correspondingly general correctness condition. Specifically, any terminating, linearizable implementation of counters from atomic registers and load-linked/store-conditional objects that satisfies a natural locality property necessarily gives the weak adversary more power than it has with atomic counters.
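To make the distinction concrete, the following is a minimal sketch (not taken from the paper itself) of the extra requirement that strong linearizability imposes: beyond producing a valid linearization of every finite history, the linearization function must be prefix-preserving, so extending a history may only append to its linearization, never reorder or retract it. All identifiers below (History, is_prefix_preserving, the toy histories) are illustrative, and Python is used purely for exposition.

# Sketch: checking the prefix-preservation requirement that distinguishes
# strong linearizability from plain linearizability. A linearization
# function maps each finite history (here: a tuple of abstract events)
# to a sequence of operations; strong linearizability additionally
# requires that whenever one history extends another, its linearization
# extends the shorter history's linearization.

from itertools import combinations
from typing import Callable, Sequence, Tuple

History = Tuple[str, ...]          # a finite sequence of low-level events
Linearization = Tuple[str, ...]    # an ordering of operations


def is_prefix(shorter: Sequence[str], longer: Sequence[str]) -> bool:
    """Return True if `shorter` is a prefix of `longer`."""
    return tuple(longer[: len(shorter)]) == tuple(shorter)


def is_prefix_preserving(
    histories: Sequence[History],
    linearize: Callable[[History], Linearization],
) -> bool:
    """Check the extra condition of strong linearizability: if one history
    is a prefix of another, the linearization of the shorter history must
    be a prefix of the linearization of the longer one."""
    for h1, h2 in combinations(histories, 2):
        shorter, longer = sorted((h1, h2), key=len)
        if is_prefix(shorter, longer) and not is_prefix(
            linearize(shorter), linearize(longer)
        ):
            return False
    return True


# Toy linearization function: after event e2 arrives, operation A is
# reordered behind B. Each history on its own may still be linearizable,
# but the function is not prefix-preserving.
toy_histories = [
    ("e1",),
    ("e1", "e2"),
]
toy_linearization = {
    ("e1",): ("A",),           # A linearized first in the short history
    ("e1", "e2"): ("B", "A"),  # the extension retroactively reorders A
}
print(is_prefix_preserving(toy_histories, lambda h: toy_linearization[h]))  # False

The toy function reorders two operations once the history grows, which plain linearizability tolerates but the prefix condition rejects; this after-the-fact freedom in choosing linearization points is precisely what the paper shows an adversary can exploit once coin flips are involved.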