Learning approximate plans for use in the real world
ABSTRACT: Current artificial intelligence systems have difficulty functioning in real-world environments. These systems make many implicit assumptions about the world which, if inaccurate, will cause them to fail. For a system to function in such environments, explicit approximations must be used, approximation failures must be detectable, and the system must have some method for recovering from those failures. An architecture for learning approximate plans, based on explanation-based learning, is introduced. This technique allows an approximate plan to be learned from observation of a single example of goal achievement. An example illustrates how the approximation architecture, embodied in a system called GRASPER, learns an approximate, uncertainty-tolerant plan for grasping a block in the robotics domain.
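The abstract describes three requirements: explicit approximations, detectable approximation failures, and a recovery method. A minimal sketch of such an execute-monitor-recover loop is given below; all names (`Step`, `execute`, the toy grasping functions) are illustrative assumptions, not the actual GRASPER implementation.

```python
# Hypothetical sketch of an execute-monitor-recover plan loop, in the spirit
# of the abstract: each plan step carries an explicit approximation check
# that is verified at run time, with a recovery action on failure.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    action: Callable[[dict], None]       # mutates the world state
    assumption: Callable[[dict], bool]   # explicit approximation check
    recover: Callable[[dict], None]      # invoked when the check fails

def execute(plan: list[Step], world: dict) -> list[str]:
    """Run each step; if its approximation fails afterwards, recover and retry once."""
    log = []
    for step in plan:
        step.action(world)
        if not step.assumption(world):
            log.append(f"{step.name}: approximation failed, recovering")
            step.recover(world)
            step.action(world)
        log.append(f"{step.name}: ok" if step.assumption(world)
                   else f"{step.name}: failed")
    return log

# Toy grasping example: "move" assumes the gripper lands within tolerance.
world = {"gripper": 0.0, "target": 1.0, "noise": 0.3}

def move(w):  w["gripper"] = w["target"] + w["noise"]   # imperfect actuator
def near(w):  return abs(w["gripper"] - w["target"]) < 0.1
def recal(w): w["noise"] = 0.0                          # e.g. re-sense and correct

plan = [Step("move-to-block", move, near, recal)]
print(execute(plan, world))
```

The key design point mirrored here is that the tolerance check is explicit rather than an implicit assumption, so a failure is detected immediately and triggers recovery instead of silent plan failure.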