Dynamic data structures for randomized algorithms that use sampling

This dissertation presents a new technique for transforming algorithms that use random sampling into dynamic algorithms. We design a dynamic data structure for storing the input to the algorithm, which may be completely or partially rebuilt when the input is updated. Rebuilding is independent of the value of the new input; instead, the function used in the random sampling determines when the data structure is rebuilt. Our technique is simple, general, and applicable to a wide range of data structures. It yields polylogarithmic expected update times for several classes of data structures and algorithms.

A well-known problem with randomized algorithms is that they do not always attain their expected time bounds. We address this problem by developing a method for transforming expected time bounds into high-likelihood bounds. We do this by using multiple independent processes, which we call replicants, each maintaining its own dynamic data structure. We describe a general framework for determining how many replicants are needed to ensure high-likelihood time bounds, given the algorithm's time bounds and sampling function.

To test our methods, we applied them to a randomized binary search tree, using two different models of our data structure. The empirical results were consistent with, and in some cases better than, the theoretically predicted bounds. Our code is modular, and the binary search tree module can be replaced by a code module for another data structure.

We also applied our methods to the problem of finding sphere separators for a neighborhood system and its induced graph. Using our methods, sphere separators can be maintained with an expected update time of $O(\log n)$ and a high-likelihood update time of $O(\log^3 n)$. Moreover, a separator decomposition tree can be maintained in $O(\log^3 n)$ time per update. These are the best known results for dynamic maintenance of sphere separators. An important application of graph separators is nested dissection, a widely used technique for solving sparse linear systems. We present new results that improve the space and time bounds of parallel implementations of nested dissection.
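
For concreteness, the following is a minimal sketch, not the dissertation's code, of one randomized binary search tree of the kind discussed above: a treap, in which every restructuring decision is driven by sampled priorities rather than by the values of the inserted keys. The names Node, insert, rotate_left, and rotate_right are illustrative assumptions, not identifiers from the dissertation's modules.

    import random

    class Node:
        def __init__(self, key):
            self.key = key
            self.priority = random.random()  # random sample attached to this key
            self.left = None
            self.right = None

    def rotate_right(y):
        # Promote y's left child; preserves the key ordering.
        x = y.left
        y.left, x.right = x.right, y
        return x

    def rotate_left(x):
        # Promote x's right child; preserves the key ordering.
        y = x.right
        x.right, y.left = y.left, x
        return y

    def insert(root, key):
        # Standard BST descent by key; restructuring (rotations) is triggered
        # only by comparing sampled priorities, never by the key values.
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
            if root.left.priority > root.priority:
                root = rotate_right(root)
        else:
            root.right = insert(root.right, key)
            if root.right.priority > root.priority:
                root = rotate_left(root)
        return root

    # Example: build a treap from a sequence of keys.
    root = None
    for k in [5, 2, 8, 1, 9]:
        root = insert(root, k)

In this sketch the sampled priority plays the role of the sampling function described above: whether any part of the structure is rebuilt on an update depends only on the drawn random values, which is what yields expected logarithmic update cost independent of the input sequence.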