Information Theoretic Reasons for Computational Difficulty

We give an intuitive account of the concepts in the title by considering the following simple number-theoretic example. Imagine two distant players who communicate by exchanging binary messages (bits). One player is given a prime number x, and the second a composite number y, where x, y < 2^n. The players' task is to find a prime number p, with p < n^2, such that x ≢ y (mod p). The existence of such a small prime p is guaranteed by the prime number theorem and the Chinese remainder theorem: since x ≠ y, their difference is a nonzero number below 2^n and so has fewer than n prime divisors, while there are more than n primes below n^2.

The players agree beforehand on a "protocol" for exchanging messages. The protocol dictates to each player what message to send at each point, based on his input and the messages he has received so far. It also dictates when to stop, and how to determine the answer from the information received. There is no limit on the computational complexity of these decisions, which are free of charge. The cost of the protocol is the number of bits the players have to exchange on the worst-case choice of inputs. We shall be interested in the cost of the best protocol under this measure, which we denote by t(n).

There is a trivial protocol in which one player sends his input to the second (n bits), who computes the answer and sends it (log n bits) back to the first. This shows that t(n) ≤ n + log n. How small can t(n) be? Is it possible that t(n) = O(log n), which is (essentially) the trivial lower bound? At present, these trivial upper and lower bounds are the best known!

Why should anyone take the time to think about this problem, besides its innocently simple statement and the challenge of the exponential gap in our knowledge? The reason is that this information-theoretic problem encodes the computational difficulty of primality testing! Answering it is extremely important for computational number theory and theoretical computer science, as follows:
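The trivial protocol above can be sketched in code. This is an illustrative simulation, not from the paper: the function name `trivial_protocol` and the accounting of bits are our own, and trial-division primality testing stands in for the unbounded (free) local computation the model allows.

```python
def is_prime(m):
    """Trial division; local computation is free in the communication model."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def trivial_protocol(x, y, n):
    """Simulate the trivial protocol for a prime x and composite y, x, y < 2**n.

    Returns (p, bits_exchanged), where p is a prime with x != y (mod p).
    """
    # Round 1: Player 1 sends his entire input x, costing n bits.
    cost = n
    # Player 2 now knows both inputs and searches for a separating prime
    # p < n**2; one must exist, since x - y is nonzero and < 2**n.
    for p in range(2, n * n):
        if is_prime(p) and x % p != y % p:
            # Round 2: Player 2 sends the answer p back,
            # costing at most the bits needed to name a number < n**2.
            cost += max(1, (n * n - 1).bit_length())
            return p, cost
    return None, cost  # unreachable for valid inputs
```

For example, with n = 4, x = 13 (prime) and y = 9 (composite), the protocol finds p = 3, since 13 ≡ 1 and 9 ≡ 0 (mod 3); the total cost is n bits for x plus O(log n) bits for the answer, matching the t(n) ≤ n + log n bound.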
