Progress In Incremental Machine Learning

We will describe recent developments in a system for machine learning that we've been working on for some time (Sol 86, Sol 89). It is meant to be a "Scientist's Assistant" of great power and versatility in many areas of science and mathematics. It differs from other ambitious work in this area in that we are not so much interested in knowledge itself as we are in how it is acquired - how machines may learn.

To start off, the system will learn to solve two very general kinds of problems. Most, but perhaps not all, problems in science and engineering are of these two kinds. The first kind is Function Inversion: given a quickly computable function f and a value y, find an x such that f(x) = y. These are the P and NP problems of computational complexity theory. They include theorem proving, solution of equations, symbolic integration, etc. The second kind of problem is Time Limited Optimization: given a function f and a time limit T, find within time T an x that makes f(x) as large as possible. Inductive inference of all kinds, surface reconstruction, and image restoration are a few examples of this kind of problem. Designing an automobile in 6 months that satisfies certain specifications at minimal cost is another.

In the following discussion, we will be using the term "Probability" in a special sense: i.e., the estimate given by the best probabilistic model for the available data that we can find in the available time.

Our system starts out with a small set of Problem Solving Techniques (PSTs) and a simple General Conditional Probability Distribution (GCPD). When the system is given a problem, the description of this problem is the "Condition" for the GCPD. Its output is a probability distribution over the PSTs - the likelihood that each of them will solve the problem by time t. The system uses these PSTs and their associated probability distributions to solve the first problem. Next, it executes its Update Algorithm: the PSTs are modified, new ones may be added, some may be deleted. The GCPD is modified. These
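
To make the first problem class concrete, a minimal generate-and-test inverter in Python follows. It is only a sketch of the brute-force baseline, not the system's method; the names (invert, candidates, time_limit) are our own illustration.

    import itertools
    import time

    def invert(f, y, candidates, time_limit):
        """Generate-and-test inversion: search for x with f(x) == y
        within time_limit seconds; return x, or None on timeout."""
        deadline = time.time() + time_limit
        for x in candidates:
            if time.time() > deadline:
                return None            # out of time
            if f(x) == y:
                return x               # found a preimage of y
        return None                    # candidate stream exhausted

    # Example: invert squaring over the nonnegative integers.
    print(invert(lambda n: n * n, 169, itertools.count(0), time_limit=1.0))  # 13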
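
A time-limited optimization problem, by contrast, asks for the best value of f obtainable before the deadline, so a solver for it is naturally an anytime procedure. The sketch below uses plain random search under a wall-clock budget; again the names (optimize, sample) are illustrative assumptions rather than the paper's technique.

    import random
    import time

    def optimize(f, sample, time_limit):
        """Anytime optimization: keep the best f(x) seen among random
        samples until time_limit seconds have elapsed."""
        deadline = time.time() + time_limit
        best_x = sample()
        best_v = f(best_x)
        while time.time() < deadline:
            x = sample()
            v = f(x)
            if v > best_v:
                best_x, best_v = x, v
        return best_x, best_v

    # Example: maximize -(x - 3)^2 over [0, 10] for half a second.
    x, v = optimize(lambda x: -(x - 3.0) ** 2, lambda: random.uniform(0, 10), 0.5)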
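
The loop just described - condition the GCPD on the problem description, obtain a distribution over PSTs, solve, then update - can be sketched as follows. Everything here is hypothetical scaffolding under strong simplifying assumptions: the GCPD is reduced to smoothed success counts that ignore the problem description, whereas the real GCPD is a genuine conditional model, and the PST half of the Update Algorithm (modifying, adding, and deleting PSTs) is omitted.

    from collections import defaultdict

    class PST:
        """A Problem Solving Technique: a named procedure that returns
        a solution within its time allotment, or None on failure."""
        def __init__(self, name, procedure):
            self.name = name
            self.procedure = procedure

        def run(self, problem, seconds):
            return self.procedure(problem, seconds)

    class ToyGCPD:
        """Stand-in for the General Conditional Probability Distribution:
        smoothed per-PST success frequencies (the condition is ignored)."""
        def __init__(self):
            self.tries = defaultdict(lambda: 2)   # Laplace-style smoothing
            self.wins = defaultdict(lambda: 1)

        def distribution(self, problem, psts):
            scores = {p.name: self.wins[p.name] / self.tries[p.name] for p in psts}
            total = sum(scores.values())
            return {name: s / total for name, s in scores.items()}

        def update(self, pst_name, solved):
            self.tries[pst_name] += 1
            if solved:
                self.wins[pst_name] += 1

    def solve_and_learn(problem, psts, gcpd, time_budget):
        """One cycle: estimate success probabilities, try PSTs in order of
        estimated promise with proportional time allotments, then update."""
        dist = gcpd.distribution(problem, psts)
        for pst in sorted(psts, key=lambda p: -dist[p.name]):
            solution = pst.run(problem, time_budget * dist[pst.name])
            gcpd.update(pst.name, solution is not None)
            if solution is not None:
                return solution
        return None

Allotting each PST a share of the time budget in proportion to its estimated success probability is just one plausible policy; the point of the sketch is only the shape of the solve-then-update cycle.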