Transparent Computation and Correlated Equilibrium∗

Achieving correlated equilibrium is an important and extensively investigated problem at the intersection of many fields: in particular, game theory, cryptography, and efficient algorithms. Thus far, however, perfectly rational solutions have been lacking, and the problem has been formulated with somewhat limited objectives. In this paper, we

• provide a stronger and more general notion of correlated-equilibrium achievement; and

• provide more rational solutions in this more demanding framework.

We obtain our game-theoretic results by putting forward and exemplifying a stronger notion of secure computation. Traditionally, secure computation replaces a trusted party by multiple players computing on their separate shares of the data. In contrast, we directly replace a trusted party by a transparent device that correctly and privately evaluates any function by performing only public operations on unseen data. To construct such devices, we substantially strengthen the ballot-box techniques of [ILM05]. We demonstrate the additional power of transparent computation by proving that the game-theoretic results of this paper are unachievable by traditionally secure protocols.

∗ This material is based upon work supported by the National Science Foundation under Grant SES-0551244. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF). The authors would like to thank Michael Rabin for suggesting the term.

1 Game Theoretic Background and Correlated Equilibrium

Normal-Form Games. In a normal-form game G with n players, each player i has (1) his own finite set of actions, Ai, and (2) his own utility function, ui, mapping the action space A = A1 × . . . × An into the real numbers. The action sets and the utility functions of G are common knowledge. The game is played in a single stage, without inter-player communication. Each player i can be thought of as being isolated in his own room, facing a panel of buttons, one for each action in Ai. In such a setting, i plays action ai by pushing the corresponding button, simultaneously with the other players. Letting a = (a1, . . . , an) be the resulting outcome, each player i receives ui(a) as his payoff. A strategy σi for player i is a probability distribution over Ai.
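To make the notation concrete, here is a minimal sketch, not taken from the paper, of a normal-form game in Python. The coordination game, its action labels, and the helper expected_utility are hypothetical choices made purely for illustration; the point is only how a profile of mixed strategies σ = (σ1, . . . , σn) induces an expected utility for each player.

```python
# Minimal illustrative sketch, not from the paper: a 2-player normal-form game
# stored as payoff tables, with mixed strategies as distributions over actions.
from itertools import product
from math import prod

A = [("L", "R"), ("L", "R")]   # A_i: each player's finite action set (hypothetical)
payoff = [                     # payoff[i][a] = u_i(a) for the outcome a (hypothetical)
    {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1},
    {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1},
]

def expected_utility(i, sigma):
    """u_i evaluated on a strategy profile sigma = (sigma_1, ..., sigma_n):
    player i's expected payoff when every player j independently draws an
    action from the distribution sigma[j]."""
    return sum(
        prod(sigma[j].get(a_j, 0.0) for j, a_j in enumerate(a)) * payoff[i][a]
        for a in product(*A)   # sum over every outcome a in A_1 x ... x A_n
    )

sigma = [{"L": 0.5, "R": 0.5}, {"L": 1.0}]   # the first player mixes 50/50, the second plays L
print(expected_utility(0, sigma))            # first player's expected payoff -> 1.0
```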
Extensive-Form Games. An extensive-form game is played in multiple stages, with the players acting one at a time. Such a game can be depicted as a tree. The root represents the start of the game; an internal node, an intermediate stage of the game; and a leaf, an ending of the game. To each leaf is assigned a vector containing the utilities of all the players. The game also specifies which player acts at which node, and the actions available to him. The children of a non-leaf node N correspond to the actions available to the player designated to act at N. (Extensive-form games may also have a special player, Nature, who, when called to act, plays actions chosen according to fixed probability distributions commonly known to all players.)

An extensive-form game is of perfect information when all the players know the exact actions played so far, and thus the current node of the tree. For such games, a strategy of player i is a (possibly probabilistic) function specifying the action to take at any node where i must act. An extensive-form game is of imperfect information if the acting player is not perfectly informed about the actions played so far. Thus, even though the tree structure of the game, the payoffs at the leaves, and so on continue to be common knowledge, the acting player no longer knows the exact node he is at. The information in his possession allows him to exclude many nodes, but is compatible with his being at any one of several others. Effectively, the nodes of the tree are partitioned into so-called information sets: two nodes belong to the same information set of player i if i cannot distinguish between them. The information sets of different players are disjoint, and the game specifies the acting player at each information set.

Rational Play. Solving a game means finding the ways in which rational players can play it. A solution to a normal-form game G is a Nash equilibrium: a profile σ = (σ1, . . . , σn) of strategies that are self-reinforcing, in the sense that no player i has an incentive to deviate from his own strategy if he believes that all other players stick to theirs. Formally, σ is a Nash equilibrium if, for all players i and for all strategies σ̂i, ui(σi, σ−i) ≥ ui(σ̂i, σ−i). (Following standard notation, σ−i denotes the vector of strategies in σ for all players other than i, and the utility function ui evaluated on a vector of n strategies, rather than on an outcome of n actions, denotes i's expected utility under the n underlying distributions.)

Nash equilibria could be defined syntactically in the same way for extensive-form games, but they would no longer be impeccably rational. In a normal-form game, the strategies of a Nash equilibrium σ are best responses to each other in a setting where all players act simultaneously: the players cannot see whether the others act according to their equilibrium strategies or deviate. In an extensive-form game, by contrast, players act over time, and they may observe actions of the others that are inconsistent with the equilibrium strategies and react accordingly. In a Nash equilibrium σ, however, σi need not be i's best response once i notices that j has deviated from σj. For example, a Nash equilibrium may be sustained by the threat that, after a deviation, a player will carry out a punishment that also hurts himself, a threat he would have no incentive to honor once the deviation has actually occurred. For comprehensive coverage of game-theoretic concepts, see [OR97].
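As a companion to the formal condition above, here is a minimal sketch, again hypothetical and not from the paper, that checks whether a strategy profile of the small game from the previous sketch is a Nash equilibrium. Because player i's expected utility is linear in his own mixing probabilities, it suffices to test deviations to pure strategies; the helper functions and the example game are illustrative assumptions only.

```python
# Minimal illustrative sketch, not from the paper: checking the Nash condition
#   u_i(sigma_i, sigma_-i) >= u_i(sigma_hat_i, sigma_-i)  for every i and sigma_hat_i.
# Testing only pure deviations suffices, since u_i is linear in player i's own mix.
from itertools import product
from math import prod

A = [("L", "R"), ("L", "R")]   # same hypothetical 2-player coordination game as above
payoff = [
    {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1},
    {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1},
]

def expected_utility(i, sigma):
    """Player i's expected payoff under the strategy profile sigma."""
    return sum(
        prod(sigma[j].get(a_j, 0.0) for j, a_j in enumerate(a)) * payoff[i][a]
        for a in product(*A)
    )

def is_nash(sigma, tol=1e-9):
    """True iff no player can gain by a unilateral deviation from sigma."""
    for i in range(len(A)):
        value_at_sigma = expected_utility(i, sigma)
        for a_hat in A[i]:                    # candidate pure deviation for player i
            deviation = list(sigma)
            deviation[i] = {a_hat: 1.0}       # sigma with sigma_i replaced by a_hat
            if expected_utility(i, deviation) > value_at_sigma + tol:
                return False
    return True

print(is_nash([{"L": 1.0}, {"L": 1.0}]))      # (L, L): both get 2, no profitable deviation -> True
print(is_nash([{"L": 1.0}, {"R": 1.0}]))      # (L, R): either player gains by switching -> False
```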
