Lyapunov methods for safe intelligent agent design

In the many successful applications of artificial intelligence (AI) methods to real-world problems in domains such as medicine, commerce, and manufacturing, the AI system usually plays an advisory or monitoring role: it provides information to a human decision-maker, who has the final say. However, for applications ranging from space exploration to e-commerce to search-and-rescue missions, there is an increasing need for AI systems that display a much greater degree of autonomy. In designing autonomous AI systems, or agents, issues of safety, reliability, and robustness become critical. Does the agent observe appropriate safety constraints? Can we provide performance or goal-achievement guarantees? Does the agent deliberate and learn efficiently, in real time?

In this dissertation, we address some of these issues by developing an approach to agent design that integrates control-theoretic techniques, primarily methods based on Lyapunov functions, with planning and learning techniques from AI. Our main strategy is to use control-theoretic domain knowledge to formulate, or restrict, the ways in which the agent can interact with its environment. This approach allows one to construct agents that enjoy provable safety and performance guarantees, and that reason and act in real time or in an anytime fashion. Because the guarantees follow from restrictions on the agent's behavior, specialized “safety-oriented” decision-making algorithms are not necessary; agents can reason using standard AI algorithms. We discuss state-space search and reinforcement learning agents in detail.

To a limited degree, we also show that the control-theoretic domain knowledge needed to ensure safe agent behavior can itself be learned by the agent, and need not be known a priori. We demonstrate our theory with simulation experiments on standard problems from robotics and control.
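To make the central idea concrete, the following is a minimal sketch, not taken from the dissertation itself. It assumes deterministic, known dynamics (step), a small discrete action set, and a known Lyapunov function V for a one-dimensional point mass; all names (safe_actions, min_descent, and so on) are hypothetical illustrations. The point is that restricting the agent to actions that strictly decrease V guarantees progress toward the goal no matter how the agent chooses among the remaining actions.

    import numpy as np

    def safe_actions(state, actions, step, V, min_descent=1e-3):
        """Keep only actions whose successor state decreases the
        Lyapunov function V by at least min_descent (the descent
        condition that underwrites the safety guarantee)."""
        return [a for a in actions
                if V(step(state, a)) <= V(state) - min_descent]

    # Toy system: a point mass on a line; the goal is the origin.
    def step(x, a):
        return x + 0.1 * a   # deterministic dynamics; a is a velocity command

    def V(x):
        return x * x         # V(x) = x^2 is a Lyapunov function here

    actions = [-1.0, 0.0, 1.0]

    x = 2.0
    steps = 0
    rng = np.random.default_rng(0)
    while V(x) > 1e-4:
        allowed = safe_actions(x, actions, step, V)
        if not allowed:      # no strictly descending action remains
            break
        # Any choice among the allowed actions -- random, greedy, or
        # learned -- still drives V, and hence the state, to the goal.
        x = step(x, rng.choice(allowed))
        steps += 1
    print(f"reached |x| = {abs(x):.4f} after {steps} steps")

Because the safety argument lives entirely in the action restriction, the rng.choice line could be replaced by any standard decision-making procedure, such as a state-space search or a learned Q-function, without weakening the descent guarantee.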