A condition guaranteeing the existence of higher-dimensional constrained Delaunay triangulations

Let X be a complex of vertices and piecewise linear constraining facets embedded in E^d. Say that a simplex is strongly Delaunay if its vertices are in X and there exists a sphere that passes through its vertices but passes through or encloses no other vertex of X. Then X has a d-dimensional constrained Delaunay triangulation if each k-dimensional constraining facet in X with k ≤ d − 2 is a union of strongly Delaunay k-simplices. This theorem is especially useful in E^3 for forming tetrahedralizations that respect specified planar facets. If the bounding segments of these facets are subdivided so that the subsegments are strongly Delaunay, then a constrained tetrahedralization exists. Hence, fewer vertices are needed than in the most common practice in the literature, wherein additional vertices are inserted in the relative interiors of facets to form a conforming (but unconstrained) Delaunay tetrahedralization.
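A practical sufficient test for a subsegment being strongly Delaunay is an empty diametral sphere: the smallest sphere through the segment's endpoints passes through its two vertices, so if no other vertex lies on or inside it, the segment satisfies the definition above. The sketch below illustrates that check, assuming points are given as coordinate tuples; the function name `diametral_sphere_empty` is illustrative, not from the paper.

```python
import math

def diametral_sphere_empty(p, q, vertices, tol=1e-12):
    """Sufficient test for segment pq to be strongly Delaunay.

    The diametral sphere of pq (center at the midpoint, radius half
    the segment length) passes through p and q. If no other vertex
    lies on or inside it, the sphere passes through or encloses no
    other vertex, so pq is strongly Delaunay. (The converse does not
    hold: a strongly Delaunay segment may have a nonempty diametral
    sphere but some other empty circumscribing sphere.)
    """
    center = tuple((a + b) / 2.0 for a, b in zip(p, q))
    radius = math.dist(p, q) / 2.0
    for v in vertices:
        if v == p or v == q:
            continue
        # A vertex on or inside the diametral sphere violates the test.
        if math.dist(v, center) <= radius + tol:
            return False
    return True
```

In a Delaunay refinement setting, a subsegment failing this test is said to be encroached; splitting encroached subsegments at their midpoints until all pass is one way to reach the condition the theorem requires.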