Preferential Semantics for Goals

Goals, as typically conceived in AI planning, provide an insufficient basis for choice of action, and hence are deficient as the sole expression of an agent's objectives. Decision-theoretic utilities offer a more adequate basis, yet lack many of the computational advantages of goals. We provide a preferential semantics for goals that grounds them in decision theory and preserves the validity of some, but not all, common goal operations performed in planning. This semantic account yields a criterion for verifying the design of goal-based planning strategies, thus offering a new framework for knowledge-level analysis of planning systems.
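
As one illustrative reading (a sketch only; the body of the paper gives the precise definitions): a goal proposition $G$ may be interpreted as a preference, other things being equal, for outcomes in which $G$ holds over outcomes in which it does not. Writing $\succ$ for the agent's strict preference over outcome states and $s \equiv_{\bar G} t$ for agreement between states $s$ and $t$ on all attributes other than $G$, this reading can be expressed as

\[
\forall s, t:\; \bigl( s \models G \;\wedge\; t \not\models G \;\wedge\; s \equiv_{\bar G} t \bigr) \;\Rightarrow\; s \succ t .
\]

On such a ceteris paribus reading, adopting a goal constrains the agent's preferences without requiring that the goal dominate all other considerations, which is how the semantics connects goals to decision-theoretic preference.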