A Reducibility Concept for Problems Defined in Terms of Ordered Binary Decision Diagrams *

* We are grateful to DAAD ACCIONES INTEGRADAS, grant Nr. 322-ai-e-dr.

1 Introduction

Reducibility is one of the most basic notions in complexity theory. It provides a fundamental tool for comparing the computational complexity of different problems. The key idea is to use a program for a device that solves one problem Σ as a subroutine within the computation of another problem Π. If this is possible, Π is said to be reducible to Σ. Reductions provide the possibility to conclude upper bound results on the computational complexity of the problem Π and lower bounds for Σ, if one insists that the program for Π designed around the subroutine for Σ respects certain resource complexity bounds of interest.

In the past, a great variety of different reducibility notions has been investigated in order to get a better understanding of the different computational paradigms and/or resource bounds. Here we only mention polynomial-time Turing reducibility, log-space reducibility, polynomial projection reducibility, and NC^1-reducibility (see, e.g., [Lee90], [BDG88]). This great variety of reducibility notions is a consequence of the fact that the computational power available to the reduction must not be stronger than the computational power of the complexity class under consideration. Otherwise, the possibility of hiding some essential computations within the reduction is a threat to the relevance of the obtained results. For example, polynomial-time reducibility does not give any insight into the computational complexity of logarithmic-space bounded computations.

The computational power implementable in a reducibility notion for complexity classes defined in terms of very restricted computational models (e.g., eraser Turing machines [KMW88], real-time branching programs [KW87], or ordered binary decision diagrams [Bry92]) becomes extremely limited since, in general, almost all of the resources are already consumed by the programs which are used as subroutines in the reductions. Hence, the traditional approach results in reducibility concepts that merely allow highly similar problems to be related to each other (e.g., [BW96]). Since complexity classes defined by such restricted models are both of interest for the theory (they occur in connection with our limited abilities in proving lower bounds, e.g., [KMW88, KW87, Mei89]) and of practical importance (ordered binary decision diagrams are the state-of-the-art data structure in computer-aided circuit design [Bry92, BCMD, Bry95]), it is highly desirable to develop more powerful reducibility concepts.

Here, we consider the case of complexity classes defined by ordered binary decision diagrams (OBDDs), i.e., read-once binary decision diagrams with a fixed variable ordering. We attempt to overcome the difficulties mentioned above by introducing a new reducibility concept that is based on the following idea: a problem Π is reducible to a problem Σ if an OBDD for Π can be constructed from a given OBDD for Σ by applying a sequence of elementary operations (here 'elementary' means 'performable in constant time').
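To fix intuition for the objects involved, the following is a minimal sketch of an OBDD as a node-based data structure: every internal node tests one variable, the variables encountered along any root-to-sink path respect one fixed ordering, and each variable is read at most once. The sketch is only illustrative; the names (Node, evaluate) are ours, not the paper's notation.

```python
# Illustrative OBDD skeleton (hypothetical names, not the paper's formalism).
# Sinks are represented directly by the Booleans False and True.

class Node:
    """Internal node testing variable `var`; `low`/`high` are the
    successors for the assignments var = 0 and var = 1, respectively."""
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

def evaluate(node, assignment):
    """Follow the unique path determined by `assignment` (dict: var -> bool).
    Since each variable is tested at most once per path, this takes at
    most one step per variable."""
    while not isinstance(node, bool):
        node = node.high if assignment[node.var] else node.low
    return node

# Example: an OBDD for x1 AND x2 under the natural ordering x1 < x2.
f = Node(1, False, Node(2, False, True))
assert evaluate(f, {1: True, 2: True}) is True
assert evaluate(f, {1: True, 2: False}) is False
```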
In contrast to previous reducibility notions, the suggested one is able to reflect the real needs of a reducibility concept in the context of OBDD-based complexity classes: firstly, it allows problems that are computable with the same amount of OBDD resources to be reduced to each other, and, secondly, it allows lower and upper bounds to be carried over. Although well-motivated, a reducibility based on sequences of elementary operations is difficult to describe and to handle, since it has to deal with permanently changing OBDDs. For this reason, we develop a formalism which allows a more 'static' description in terms of so-called OBDD-transformers. We prove that the size of an OBDD obtained by the application of a sequence of elementary operations can be estimated in terms of the sizes of the original OBDD and of the corresponding OBDD-transformer. Hence, the formalism gives a solid basis for complexity-theoretic investigations.

2 Notations and Preliminaries

Let X_n denote the set {x_1, x_2, ..., x_n} of Boolean variables. A variable ordering on X_n is a total order on X_n and is described by a permutation π of the index set I_n = {1, ..., n}, i.e., x_i < x_j iff π^{-1}(i) < π^{-1}(j) (a small code sketch of this definition is given below). Throughout the paper, we will work with the extension of an ordering to the constants false and true, which are defined to be maximal (false and true remain incomparable with each other). The identity permutation defines the so-called natural ordering. Two orderings (possibly defined on different variable sets) are
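As referenced above, here is a minimal sketch of the ordering definition: a permutation π of I_n, stored as a list, induces the order x_i < x_j iff π^{-1}(i) < π^{-1}(j), and the constants false and true are treated as maximal and mutually incomparable. The helper names (position, precedes, CONST) are hypothetical, introduced only for illustration.

```python
# Illustrative check of the ordering induced by a permutation pi;
# hypothetical helpers, not part of the paper's formalism.

CONST = object()  # a constant sink (false/true), treated as maximal

def position(pi, i):
    """pi is a permutation of {1, ..., n} given as a list; return the
    0-based position at which index i occurs, i.e. pi^{-1}(i) up to a
    fixed 1-based offset (which does not affect comparisons)."""
    return pi.index(i)

def precedes(pi, a, b):
    """True iff a comes strictly before b in the ordering induced by pi.
    Constants are maximal and incomparable to each other."""
    if a is CONST:
        return False           # a constant never precedes anything
    if b is CONST:
        return True            # every variable precedes the constants
    return position(pi, a) < position(pi, b)

# Natural ordering on {x1, x2, x3}: pi is the identity.
pi = [1, 2, 3]
assert precedes(pi, 1, 2)              # x1 < x2
assert precedes(pi, 3, CONST)          # variables precede false/true
assert not precedes(pi, CONST, 1)      # constants are maximal
assert not precedes(pi, CONST, CONST)  # false and true stay incomparable

# A non-natural ordering x2 < x1 < x3 is described by pi = [2, 1, 3].
assert precedes([2, 1, 3], 2, 1)
```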