Learning approximate plans for use in the real world

ABSTRACT Current artificial intelligence systems have difficulty functioning in real-world environments. These systems make many implicit assumptions about the world which, if inaccurate, will cause them to fail. Functioning in such environments requires that explicit approximations be used, that approximation failures be detectable, and that the system have some method for recovering from those failures. An architecture for learning approximate plans, based on explanation-based learning, is introduced. This technique allows approximate plans to be learned from observation of a single example of a goal achievement. An example illustrates how the approximation architecture, embodied in a system called GRASPER, is able to learn an approximate, uncertainty-tolerant plan for grasping a block in the robotics domain.