Specifying Concurrent Problems: Beyond Linearizability and up to Tasks - (Extended Abstract)

Tasks and objects are the two predominant ways of specifying distributed problems. A task specifies, for each set of processes that may run concurrently, the valid outputs of those processes. An object specifies the outputs the object may produce when it is accessed sequentially. Each requires its own implementation notion to tell when an execution satisfies the specification. For objects, linearizability is commonly used, while implementation notions for tasks are less explored. Sequential specifications are very convenient; especially important is the locality property of linearizability, which states that linearizable objects compose for free into a linearizable object. However, most well-known tasks have no sequential specification, and tasks have no clear locality property. The paper introduces the notion of an interval-sequential object; the corresponding implementation notion, interval-linearizability, generalizes linearizability. Interval-linearizability makes it possible to specify any task. However, there are sequential one-shot objects that cannot be expressed as tasks under the simplest interpretation of a task. The paper also shows that a natural extension of the notion of a task is expressive enough to specify any interval-sequential object.
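To make the task-style of specification concrete, here is a minimal sketch (not from the paper; all names are illustrative) of the classic k-set agreement task: a task constrains which output assignments are valid for a set of processes running concurrently, rather than describing a sequential object's behavior.

```python
# Illustrative sketch of a task-style specification: k-set agreement.
# A task maps each concurrent input assignment to the set of valid
# output assignments. Here we express that map as a validity predicate.

def k_set_agreement_valid(inputs, outputs, k):
    """Return True iff `outputs` is a valid output assignment for the
    k-set agreement task on the given `inputs`: every decided value
    must be some process's input, and at most k distinct values may
    be decided overall."""
    return set(outputs) <= set(inputs) and len(set(outputs)) <= k

# Two processes run concurrently with inputs 1 and 2. With k = 2,
# each may decide its own value, so (1, 2) is a valid output vector.
print(k_set_agreement_valid([1, 2], [1, 2], k=2))  # True
# With k = 1 (consensus), the processes must agree on one value.
print(k_set_agreement_valid([1, 2], [1, 2], k=1))  # False
print(k_set_agreement_valid([1, 2], [1, 1], k=1))  # True
```

For k ≥ 2 this task has no sequential specification: validity depends on the whole concurrent output assignment, not on any interleaving of individual operations, which is the kind of problem interval-linearizability is designed to capture.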
