Can every randomized algorithm be derandomized?

Among the most important modern algorithmic techniques is the use of random decisions. Starting in the 1970s, many of the most significant results were randomized algorithms solving basic computational problems that had, until then, resisted efficient deterministic computation (Ber72, SS79, Rab80, Sch80, Zip79, AKLLR). In contrast, much of the most exciting recent work has been on derandomizing these same algorithms, i.e., devising efficient deterministic versions of them, e.g., (AKS02, Rein05). This raises the question: can such results be obtained for all randomized algorithms? Will the remaining classical randomized algorithms be derandomized by similar techniques?

Clear but complicated answers to these questions have emerged from complexity-theoretic studies of randomized complexity classes (e.g., RP and BPP) and pseudo-random generators. These questions are inextricably linked to another basic problem in complexity: which functions require large circuits to compute?

In this talk, we'll survey some results from the theory of derandomization. I'll stress connections to other questions, especially circuit complexity, explicit extractors, hardness amplification, and error-correcting codes. Much of the talk is based on joint work with Valentine Kabanets and Avi Wigderson, but it will also include results by many other researchers.

A priori, the possibilities concerning the power of randomized algorithms include:

- Randomization always helps speed up intractable problems, i.e., EXP = BPP.
- The extent to which randomization helps is problem-specific: depending on the problem, it can reduce complexity by any amount, from not at all to exponentially.
- True randomness is never needed, and random choices can always be simulated deterministically, i.e., P = BPP.

Either of the last two possibilities seems plausible, but most consider the first wildly implausible. However, while a strong version of the middle possibility has been ruled out, the implausible first one is still open.
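By way of a concrete example (an illustrative sketch, not part of the results surveyed), the randomized primality tests of SS79 and Rab80 show the shape of these classical algorithms. Below is a standard Miller-Rabin-style implementation; the constant 20 for the number of trials is an arbitrary choice for illustration:

```python
import random

def is_probably_prime(n, trials=20):
    """Randomized primality test in the style of Rab80.

    Returns False only when n is certainly composite; a True answer
    is wrong with probability at most 4**(-trials).
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)  # the random choices ("coin flips")
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Note the one-sided error: a prime is never declared composite, while a composite is declared prime only with tiny probability. This one-sidedness is exactly the structure captured by the class RP (here, its complement coRP), and it is this kind of algorithm that AKS02 eventually derandomized.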
Recent results indicate that the last possibility, P = BPP, is both very likely to be the case and very difficult to prove. More precisely:

- Either no problem in E has strictly exponential circuit complexity, or P = BPP. This seems to be strong evidence that, in fact, P = BPP, since otherwise circuits could always shortcut computation time for hard problems (NW, BFNW, IW97, STV01, SU01, Uma02).
- Either BPP = EXP, or every problem in BPP has a deterministic sub-exponential time algorithm that works on almost all instances. In other words, either randomness solves every hard problem, or it does not help exponentially, except on rare instances. This rules out strong problem-dependence, since if randomization helps exponentially for many instances of some problem, we can conclude that it helps exponentially for all intractable problems (IW98).
- If RP = P, then either the permanent requires super-polynomial algebraic circuits, or there is a problem in NEXP that has no polynomial-size Boolean circuits (IKW01, KI). That is, proving the last possibility requires one to prove a new circuit lower bound, and so is likely to be difficult. (Moreover, we do not need the full hypothesis that P = RP to obtain this conclusion: it suffices that the Schwartz-Zippel identity-testing algorithm be derandomizable. Thus, we will not be able to derandomize even the "classic" algorithms without proving circuit lower bounds.)

All of these results use the hardness-vs-randomness paradigm introduced by Yao (Yao82; see also BM, Levin): use a hard computational problem to define a small set of "pseudo-random" strings that no resource-limited adversary can distinguish from random, and use these pseudo-random strings to replace the random choices in a probabilistic algorithm. The algorithm will not have enough time to distinguish the pseudo-random sequences from truly random ones, and so will behave the same as it would have given random sequences.
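The paradigm can be illustrated on the Schwartz-Zippel identity test itself. The sketch below is illustrative only: `toy_prg` is a placeholder stand-in (a plain linear congruential generator with no pseudorandomness guarantee), whereas the theorems above require a generator built from a provably hard function; `seed_bits` and the trial count are arbitrary illustrative parameters.

```python
import random

def sz_test(p, num_vars, degree, q, trials=20):
    """Randomized identity testing (Sch80, Zip79): a nonzero polynomial
    of total degree `degree` over Z_q vanishes at a uniformly random
    point with probability at most degree/q, so a True answer here is
    wrong with probability at most (degree/q)**trials."""
    for _ in range(trials):
        point = [random.randrange(q) for _ in range(num_vars)]
        if p(*point) % q != 0:
            return False  # found a nonzero evaluation: p is not zero
    return True

def toy_prg(seed, num_vars, q):
    """Placeholder 'generator' (for illustration only): stretches a
    short seed into num_vars field elements via an LCG."""
    vals, s = [], seed
    for _ in range(num_vars):
        s = (1103515245 * s + 12345) % 2**31
        vals.append(s % q)
    return vals

def derandomized_sz_test(p, num_vars, degree, q, seed_bits=8):
    """Hardness vs. randomness: replace the random point by generator
    outputs and try every seed.  If the generator fooled the tester,
    this deterministic loop would accept exactly the identically zero
    polynomials, at a cost of 2**seed_bits trials."""
    for seed in range(2 ** seed_bits):
        point = toy_prg(seed, num_vars, q)
        if p(*point) % q != 0:
            return False
    return True
```

With a genuine hardness-based generator (NW, IW97), the seed length is logarithmic in the running time of the test it must fool, so enumerating all seeds keeps the deterministic simulation polynomial.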

[1]  Avi Wigderson, et al. In search of an easy witness: exponential time vs. probabilistic polynomial time, 2001, Proceedings 16th Annual IEEE Conference on Computational Complexity.

[2]  Richard J. Lipton, et al. Random walks, universal traversal sequences, and the complexity of maze problems, 1979, 20th Annual Symposium on Foundations of Computer Science.

[3]  Andrew Chi-Chih Yao. Theory and Applications of Trapdoor Functions (Extended Abstract), 1982, FOCS.

[5]  Jacob T. Schwartz. Fast Probabilistic Algorithms for Verification of Polynomial Identities, 1980, J. ACM.

[6]  L. Valiant. Why is Boolean complexity theory difficult, 1992.

[7]  Luca Trevisan. Extractors and pseudorandom generators, 2001, J. ACM.

[8]  Noam Nisan, et al. BPP has subexponential time simulations unless EXPTIME has publishable proofs, 1991, Proceedings of the Sixth Annual Structure in Complexity Theory Conference.

[9]  Madhu Sudan. Decoding of Reed Solomon Codes beyond the Error-Correction Bound, 1997, J. Complex.

[10]  Christopher Umans. Pseudo-random generators for all hardnesses, 2002, Proceedings 17th IEEE Annual Conference on Computational Complexity.

[11]  Lance Fortnow, et al. BPP has subexponential time simulations unless EXPTIME has publishable proofs, 2005, Computational Complexity.

[12]  David S. Johnson. The NP-Completeness Column: An Ongoing Guide, 1982, J. Algorithms.

[13]  Leonid A. Levin. One-way functions and pseudorandom generators, 1985, STOC '85.

[14]  Volker Strassen, et al. A Fast Monte-Carlo Test for Primality, 1977, SIAM J. Comput.

[15]  Christopher Umans, et al. Simple extractors for all min-entropies and a new pseudo-random generator, 2001, Proceedings 42nd IEEE Symposium on Foundations of Computer Science.

[16]  Richard Zippel. Probabilistic algorithms for sparse polynomials, 1979, EUROSAM.

[17]  Russell Impagliazzo, et al. Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds, 2003, STOC '03.

[18]  M. Rabin. Probabilistic algorithm for testing primality, 1980.

[19]  Avi Wigderson, et al. Randomness vs. time: de-randomization under a uniform assumption, 1998, Proceedings 39th Annual Symposium on Foundations of Computer Science.

[20]  Michael Sipser. Expanders, Randomness, or Time versus Space, 1988, J. Comput. Syst. Sci.

[21]  Omer Reingold. Undirected ST-connectivity in log-space, 2005, STOC '05.

[22]  Luca Trevisan, et al. Pseudorandom generators without the XOR lemma, 1999, Proceedings Fourteenth Annual IEEE Conference on Computational Complexity.

[23]  Noam Nisan, et al. Randomness is Linear in Space, 1996, J. Comput. Syst. Sci.

[24]  Manindra Agrawal, et al. PRIMES is in P, 2004.

[25]  Manuel Blum, et al. How to generate cryptographically strong sequences of pseudo random bits, 1982, 23rd Annual Symposium on Foundations of Computer Science.

[26]  Noam Nisan, et al. Hardness vs. randomness, 1988, Proceedings 29th Annual Symposium on Foundations of Computer Science.

[27]  Miklos Santha, et al. Generating Quasi-Random Sequences from Slightly-Random Sources (Extended Abstract), 1984, FOCS.

[28]  Luca Trevisan, et al. List-decoding using the XOR lemma, 2003, 44th Annual IEEE Symposium on Foundations of Computer Science.

[29]  Avi Wigderson, et al. P = BPP if E requires exponential circuits: derandomizing the XOR lemma, 1997, STOC '97.

[30]  David Zuckerman. Simulating BPP using a general weak random source, 2005, Algorithmica.