Probabilistic computations: Toward a unified measure of complexity

1. Introduction

The study of the expected running time of algorithms is an interesting subject from both a theoretical and a practical point of view. Basically, there exist two approaches to this study. In the first approach (we shall call it the distributional approach), some "natural" distribution is assumed for the input of a problem, and one looks for fast algorithms under this assumption (see Knuth [8]). For example, in sorting n numbers, it is usually assumed that all n! initial orderings of the numbers are equally likely. A common criticism of this approach is that distributions vary a great deal in real-life situations; furthermore, very often the true distribution of the input is simply not known. An alternative approach, which attempts to overcome this shortcoming by allowing stochastic moves in the computation, has recently been proposed. This is the randomized approach made popular by Rabin [10] (also see Gill [3], Solovay and Strassen [13]), although the concept was familiar to statisticians (for example, see Luce and Raiffa [9]). Note that by allowing stochastic moves in an algorithm, the input is effectively being randomized. We shall refer to such an algorithm as a randomized algorithm.

These two approaches lead naturally to two different definitions of the intrinsic complexity of a problem, which we term the distributional complexity and the randomized complexity, respectively. (Precise definitions and examples will be given in Sections 2 and 3.) To solidify the ideas, we look at familiar combinatorial problems that can be modeled by decision trees. In particular, we consider (a) the testing of an arbitrary graph property from an adjacency matrix (Section 2), and (b) partial order problems on n elements (Section 3). We will show that for these two classes of problems, the two complexity measures always agree by virtue of a famous theorem, the Minimax Theorem of von Neumann [14].

The connection between the two approaches lends itself to applications. With two different views (in a sense complementary to each other) on the complexity of a problem, it is frequently easier to derive upper and lower bounds. For example, using the adjacency matrix representation of a graph, it can be shown that no randomized algorithm can determine the existence of a perfect matching in fewer than O(n^2) probes. Such lower bounds for the randomized approach were previously lacking. As another example of application, we can prove that for the partial order problems in (b), assuming uniform …
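(The following sketch is ours, not part of the original paper; it is meant only to make concrete the remark above that stochastic moves effectively randomize the input. It shows, in Python, how a single stochastic move, choosing the pivot of quicksort at random, yields an expected O(n log n) number of comparisons on every fixed input, the same guarantee that a deterministic quicksort enjoys only when all n! input orderings are assumed equally likely. The function name and the sample data are illustrative.)

    import random

    def randomized_quicksort(a):
        """Sort a list of comparable items.

        The only stochastic move is the pivot choice; for every fixed
        input ordering the expected number of comparisons is O(n log n),
        mirroring the average-case behaviour of deterministic quicksort
        under the uniform distribution over the n! input orderings.
        """
        if len(a) <= 1:
            return list(a)
        pivot = random.choice(a)  # stochastic move: the pivot is chosen at random
        smaller = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        larger = [x for x in a if x > pivot]
        return randomized_quicksort(smaller) + equal + randomized_quicksort(larger)

    if __name__ == "__main__":
        print(randomized_quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]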
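(The following restatement of the identity between the two measures is ours and uses notation not introduced in the original; it is intended only to indicate the form the result takes. Regard the choice of an input and the choice of a deterministic algorithm as a two-person zero-sum game whose payoff is the running time c(a, x). For a finite set A of deterministic algorithms and a finite input set X, von Neumann's Minimax Theorem gives

    \max_{\mu}\ \min_{a \in A}\ \mathbb{E}_{x \sim \mu}\big[c(a,x)\big]
      \;=\;
    \min_{\rho}\ \max_{x \in X}\ \mathbb{E}_{a \sim \rho}\big[c(a,x)\big],

where \mu ranges over probability distributions on the inputs and \rho over probability distributions on the algorithms, i.e., over randomized algorithms. The left-hand side is the distributional complexity under a least favorable input distribution; the right-hand side is the randomized complexity.)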

[1] J. von Neumann. Zur Theorie der Gesellschaftsspiele, 1928.

[2] John E. Hopcroft and Jeffrey D. Ullman. Formal Languages and Their Relation to Automata. Addison-Wesley Series in Computer Science and Information Processing, 1969.

[3] David G. Kirkpatrick. Topics in the Complexity of Combinatorial Algorithms, 1974.

[4] John T. Gill. Computational complexity of probabilistic Turing machines. STOC '74, 1974.

[5] F. Yao. On lower bounds for selection problems, 1974.

[6] Ira Pohl. Minimean optimality in sorting algorithms. 16th Annual Symposium on Foundations of Computer Science (FOCS 1975), 1975.

[7] Robert W. Floyd and Ronald L. Rivest. Expected time bounds for selection. Commun. ACM, 1975.

[8] Michael L. Fredman. How good is the information theory bound in sorting? Theor. Comput. Sci., 1976.

[9] Ronald L. Rivest and Jean Vuillemin. On recognizing graph properties from adjacency matrices. Theor. Comput. Sci., 1976.

[10] Robert M. Solovay and Volker Strassen. A fast Monte-Carlo test for primality. SIAM J. Comput., 1977.

[11] J. Lawrence Carter and Mark N. Wegman. Universal classes of hash functions. J. Comput. Syst. Sci., 1979.