Theoretical bounds on the complexity of inexact computations

This paper considers the reduction in algorithmic complexity that can be achieved by permitting approximate answers to computational problems. It is shown that Shannon's rate-distortion function can, under quite general conditions, provide lower bounds on the mean complexity of inexact computations. As practical examples of this approach, we show that partial sorting of $N$ items, insisting on matching any nonzero fraction of the items with their correct successors, requires $O(N \log N)$ comparisons. On the other hand, partial sorting in linear time is feasible (and linear time is necessary) if one permits any finite fraction of pairs to remain out of order. It is also shown that any error tolerance below 50 percent can neither reduce the state complexity of binary $N$-sequences from the zero-error value of $O(N)$ nor reduce the combinational complexity of $N$-variable Boolean functions from the zero-error level of $O(2^{N}/N)$.
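
To make the linear-time claim concrete, here is a minimal illustrative sketch (not the construction from the paper) of one standard way to sort while leaving only a bounded fraction of pairs out of order: bucket the items into roughly $1/\varepsilon$ rank classes using sampled pivots and leave each bucket internally unsorted, so inversions can occur only within a bucket. The function name approximate_sort and the parameters eps and sample_size are illustrative choices, not taken from the paper.

```python
import random
from bisect import bisect_right

def approximate_sort(items, eps=0.1, sample_size=1000):
    """Return a permutation of items with roughly at most an eps fraction
    of all pairs left out of order (illustrative sketch only)."""
    if not items:
        return []
    k = max(1, int(round(1 / eps)))  # number of rank buckets, about 1/eps
    sample = sorted(random.sample(items, min(sample_size, len(items))))
    # k-1 pivot values that split the sample into k roughly equal parts
    pivots = [sample[(i * len(sample)) // k] for i in range(1, k)]

    # Distribute items into buckets: O(log k) comparisons per item,
    # O(N log(1/eps)) in total, i.e. linear in N for any fixed eps.
    buckets = [[] for _ in range(k)]
    for x in items:
        buckets[bisect_right(pivots, x)].append(x)

    # Concatenate the buckets without sorting inside them; out-of-order
    # pairs can only occur within a bucket, so their expected fraction
    # is about 1/k (roughly eps).
    return [x for bucket in buckets for x in bucket]
```

For instance, approximate_sort(data, eps=0.01) uses on the order of $N \log_2 100 \approx 7N$ comparisons for the bucketing step, independent of how large $N$ grows, whereas exact sorting would need $\Omega(N \log N)$ comparisons.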