Brief Announcement: Secure Self-Stabilizing Computation

Self-stabilization refers to the ability of a system to recover after temporary violations of the conditions required for its correct operation. Such violations may drive the system into an arbitrary state, from which it should automatically recover. Today, beyond recovering functionality, there is a need to recover security and confidentiality guarantees as well. To the best of our knowledge, there are currently no self-stabilizing protocols that also ensure the recovery of confidentiality, authenticity, and integrity properties. Specifically, self-stabilizing systems are designed to regain functionality, which is, roughly speaking, the desired input-output relation, while ignoring the security and confidentiality of the computation and its state. Distributed (cryptographic) protocols for generic secure and privacy-preserving computation, e.g., secure Multi-Party Computation (MPC), usually ensure secrecy of inputs and outputs and correctness of the computation only when the adversary is limited to compromising a fraction of the components in the system, e.g., the computation is secure only in the presence of an honest majority of the participating parties. While there are MPC protocols that are secure against a dishonest majority, in reality the adversary may compromise all components of the system for a while; some of the corrupted components may then recover, e.g., due to security patches and software updates, or due to periodic code refresh and local-state consistency checking and enforcement based on self-stabilizing hardware and software techniques. It is currently unclear whether a system and its state can be designed to always fully recover following such individual, asynchronous recoveries. This paper introduces Secure Self-Stabilizing Computation, which answers this question in the affirmative. A secure self-stabilizing design ensures that the secrecy of inputs and outputs and the correctness of the computation are automatically regained, even if at some point the entire system is compromised. We model the distributed computation task as the implementation of a virtual global finite state machine (FSM), which represents commonly realized computations. The FSM is designed to regain consistency and security in the presence of a minority of Byzantine participants, e.g., one third of the parties, and following a temporary corruption of the entire system. We use this task and setting to demonstrate the definition of secure self-stabilizing computation. We show how our algorithms and system autonomously restore the security and confidentiality of the FSM computation once the required corruption thresholds are again respected.
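The protocol itself is not reproduced in this brief announcement. As an illustration only, the minimal sketch below shows the standard building block such designs typically rely on: Shamir threshold secret sharing of the (virtual) FSM state, so that a compromised minority of parties learns nothing about the state, while any sufficiently large set of honest or recovered parties can reconstruct it and re-share it. The field prime, the parameters, and the function names `share_state` and `reconstruct_state` are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch (not the paper's protocol): the FSM state, encoded as a
# field element, is split into Shamir shares so that t compromised parties
# learn nothing while any t+1 honest parties can reconstruct the state.
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough to encode the FSM state


def share_state(state: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split an FSM state into n Shamir shares with reconstruction threshold t+1."""
    coeffs = [state] + [random.randrange(PRIME) for _ in range(t)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the degree-t polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares


def reconstruct_state(shares: list[tuple[int, int]]) -> int:
    """Recover the state from any t+1 shares via Lagrange interpolation at 0."""
    state = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        state = (state + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return state


# Example: 4 parties, at most 1 Byzantine (n > 3t); any 2 shares rebuild the state.
shares = share_state(state=42, n=4, t=1)
assert reconstruct_state(shares[:2]) == 42
```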
