Flexible learning in a multi-component planning system

People are able to learn a wide range of lessons from a given experience, depending on which of their cognitive abilities seem to need improvement. A theory of learning to plan should account for how and why an intelligent agent can learn such a diversity of lessons. Such a theory must address not only how and when an agent learns, but also what the agent should learn, because lessons must be formulated appropriately for the skills being improved. This thesis argues that a machine learning system must be able to dynamically determine what to learn from an experience. Making this determination requires that the system possess explicit knowledge of its own decision-making procedures, and that it be able to apply this knowledge in learning. This thesis describes the CASTLE system, which learns new rules for a variety of cognitive tasks in the domain of competitive games, in particular chess. CASTLE's tasks include detection of threats and opportunities, plan recognition, goal generation, planning, counterplanning, and plan selection. CASTLE uses knowledge of its planning procedures to determine which of its decision-making components are responsible for expectation failures, and uses an abstract model of planning to formulate new rules appropriately.
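
To make the idea of component-level credit assignment concrete, the following is a minimal sketch in Python, not CASTLE's actual implementation: the Component, SelfModel, and Planner classes and the "undetected-threat" expectation type are hypothetical names chosen for illustration. The sketch shows how an explicit self-model can trace an expectation failure to the responsible decision-making component and install a newly formulated rule there.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Component:
        """One decision-making task, e.g. threat detection or plan selection."""
        name: str
        rules: List[Callable[[dict], object]] = field(default_factory=list)

    @dataclass
    class SelfModel:
        """Explicit knowledge of which component produces which expectations."""
        responsibility: Dict[str, str]  # expectation type -> component name

    class Planner:
        def __init__(self, components: Dict[str, Component], model: SelfModel):
            self.components = components
            self.model = model

        def learn_from_failure(self, failed_expectation: str,
                               new_rule: Callable[[dict], object]) -> None:
            # Credit assignment: consult the self-model to find the component
            # whose knowledge proved inadequate, then add the newly formulated
            # rule to that component rather than to the system as a whole.
            culprit = self.model.responsibility[failed_expectation]
            self.components[culprit].rules.append(new_rule)

    # Illustrative usage: an unnoticed fork is traced to threat detection,
    # so the lesson is formulated as a new threat-detection rule.
    planner = Planner(
        components={"threat-detection": Component("threat-detection")},
        model=SelfModel(responsibility={"undetected-threat": "threat-detection"}),
    )
    planner.learn_from_failure("undetected-threat",
                               lambda situation: "check for knight forks")

The point of the sketch is the routing step: because lessons are directed at individual components rather than at the planner as a whole, each lesson can be formulated in the vocabulary of the skill it is meant to improve.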