A model of problem solving is given in terms of problem information, problem features, solution sets, solution selection forms, performance measures, and performance requirements. Learning is then defined as identifying problem features and selection forms. Selection forms include, for example, mathematical formulas, rule sets, decision trees, exemplars, and neural networks. These forms have parameters that can be optimized, and this optimization is a further aspect of learning. Several examples are given, and teaching is defined in this context. Lessons that the theories of optimization and approximation offer about learning to solve problems are summarized. The potential for supercomputers combined with superoptimizers to create magic in learning how to solve problems is analyzed, and the difficulties faced are discussed. Closing comments address the automatic computation of geometrical features and rule sets.
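The idea that learning includes optimizing the parameters of a selection form can be illustrated with a minimal sketch. This is an illustrative assumption, not code from the paper: the selection form here is a hypothetical one-parameter threshold rule that maps a problem feature to one of two solution methods, the performance measure is the error rate on labeled examples, and learning is a grid search over the parameter.

```python
def select(feature, threshold):
    """Selection form (hypothetical): a one-parameter rule mapping a
    problem feature to a choice of solution method."""
    return "method_A" if feature < threshold else "method_B"

def error(threshold, examples):
    """Performance measure: fraction of examples where the selected
    method differs from the known best method."""
    misses = sum(1 for f, best in examples if select(f, threshold) != best)
    return misses / len(examples)

def learn(examples, candidates):
    """Learning step: optimize the selection form's parameter by
    minimizing the performance measure over a candidate grid."""
    return min(candidates, key=lambda t: error(t, examples))

# Synthetic examples of (feature value, best-known method).
examples = [(0.1, "method_A"), (0.3, "method_A"),
            (0.7, "method_B"), (0.9, "method_B")]
best = learn(examples, [i / 10 for i in range(11)])
```

Richer selection forms (rule sets, decision trees, neural networks) differ only in how many parameters they expose and how the optimization is carried out; the structure of the sketch is the same.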