Research in machine learning has typically addressed the problems of how and when to learn, while ignoring the problem of formulating learning tasks in the first place. This paper addresses that issue in the context of the CASTLE system,¹ which dynamically formulates learning tasks for a given situation. Our approach uses an explicit model of the decision-making process to pinpoint which system component should be improved. CASTLE can then focus the learning process on the issues involved in improving the performance of that particular component.

1. Determining what to learn

A theory of learning must ultimately address three issues: when to learn, what to learn, and how to learn. The overwhelming majority of research in machine learning has been concerned exclusively with the last of these questions, how to learn. This work ranges from purely inductive category formation to more knowledge-based approaches, and its aim has generally been to develop and explore algorithms for generalizing or specializing category definitions. The nature of the categories being defined, that is, what is being learned, is rarely a consideration in the development of these algorithms. For purely inductive approaches, it is entirely a matter of the empirical data that serves as input to the learner. In explanation-based learning (EBL), it is a matter of the user-defined "goal concept", in other words, input of another sort. In neither case is the formulation of the learning task itself taken to be within the purview of the model under development.

Some work, in particular work in which learning has been addressed within the context of performing a task, has addressed the first question above, namely when to learn. A common approach to this issue, known as failure-driven learning, is based on the idea that a system should learn in response to performance failures.
The direct connection this establishes between learning and task performance has made this approach among the most widespread in learning to plan. For the most part, however, even these models do not address the second question above, what to learn. In many cases, this is because the models are capable of learning only one type of lesson. What to learn …

The research presented here was carried out at The Institute for the Learning Sciences at Northwestern University, and is discussed in detail in the first author's Ph.D. thesis [Krulwich, 1993].
¹ CASTLE stands for Concocting Abstract Strategies Through Learning from Expectation-failures.
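To make the idea concrete, the following is a minimal sketch of failure-driven task formulation with an explicit self-model: when an expectation fails, the system consults a model of its own decision-making components to assign blame, and the learning task is formulated around the implicated component rather than fixed in advance. This is not CASTLE's actual code; the class and field names (`Component`, `is_faulty`, `violated_by`) are illustrative assumptions.

```python
class Component:
    """One element of the system's model of its own decision-making."""

    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)

    def is_faulty(self, failure):
        # Stand-in diagnostic check: did this component produce the
        # expectation that the failure violated?
        return self.name in failure["violated_by"]

def assign_blame(components, failure):
    """Return the components that the expectation failure implicates."""
    return [c for c in components if c.is_faulty(failure)]

def formulate_learning_task(component, failure):
    # The learning task is focused on the implicated component and the
    # situation that revealed the fault; it is not specified by the
    # system designer ahead of time.
    return {"target": component.name, "repair_from": failure["situation"]}

# Usage: a threat-detection component failed to anticipate an opponent's move.
model = [Component("threat-detector"), Component("plan-selector")]
failure = {"violated_by": {"threat-detector"},
           "situation": "opponent forked two pieces"}

tasks = [formulate_learning_task(c, failure)
         for c in assign_blame(model, failure)]
```

The point of the sketch is the division of labor: the failure determines *when* to learn, while the self-model determines *what* to learn by localizing the fault to one component.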
[1] Rüdiger Oehlmann et al. Learning Plan Transformations from Self-Questions: A Memory-Based Approach. AAAI, 1993.
[2] Ernest Davis et al. Representations of commonsense knowledge. The Morgan Kaufmann Series in Representation and Reasoning, 1990.
[3] Bruce Krulwich. Flexible learning in a multi-component planning system. 1993.
[4] Matthew W. Lewis et al. Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems. Cognitive Science, 1989.
[5] Derek Partridge et al. Surprisingness and Expectation Failure: What's the Difference? IJCAI, 1987.
[6] Roger C. Schank et al. Question-driven understanding: an integrated theory of story understanding, memory and learning. 1989.
[7] A. Newell. Unified Theories of Cognition. 1990.
[8] Steve Ankuo Chien. An explanation-based learning approach to incremental planning. 1991.
[9] David Leake et al. Using Introspective Reasoning to Guide Index Refinement in Case-Based Reasoning. Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 1994.
[10] Ashwin Ram et al. Multistrategy Learning with Introspective Meta-Explanations. ML, 1992.
[11] Kristian J. Hammond et al. Case-Based Planning: Viewing Planning as a Memory Task. 1989.
[12] David Atkinson et al. Generating Perception Requests and Expectations to Verify the Execution of Plans. AAAI, 1986.
[13] Tom M. Mitchell et al. Becoming Increasingly Reactive. AAAI, 1990.
[14] Gerald Jay Sussman et al. A Computer Model of Skill Acquisition. 1975.
[15] Ashwin Ram et al. Goal-Driven Learning: Fundamental Issues: A Symposium Report. AI Magazine, 1993.
[16] Jaime G. Carbonell et al. Learning effective search control knowledge: an explanation-based approach. 1988.
[17] Steven Minton et al. Constraint-Based Generalization: Learning Game-Playing Plans From Single Examples. AAAI, 1984.
[18] Gerald DeJong et al. On Integrating Machine Learning with Planning. 1993.
[19] Michael Freed et al. Plan Debugging in an Intentional System. IJCAI, 1991.
[20] Michael Freed et al. Model-Based Diagnosis of Planning Failures. AAAI, 1990.
[21] Michael Freed et al. The role of self-models in learning to plan. 1993.
[22] Lawrence Hunter et al. Knowledge acquisition planning: gaining expertise through experience. 1989.