We will describe recent developments in a system for machine learning that we have been working on for some time (Sol 86, Sol 89). It is meant to be a "Scientist's Assistant" of great power and versatility in many areas of science and mathematics. It differs from other ambitious work in this area in that we are not so much interested in knowledge itself as we are in how it is acquired - how machines may learn.

To start off, the system will learn to solve two very general kinds of problems. Most, but perhaps not all, problems in science and engineering are of these two kinds. The first kind is Function Inversion. These are the P and NP problems of computational complexity theory. They include theorem proving, solution of equations, symbolic integration, etc. The second kind of problem is Time Limited Optimization. Inductive inference of all kinds, surface reconstruction, and image restoration are a few examples of this kind of problem. Designing an automobile in six months that satisfies certain specifications and has minimal cost is another.

In the following discussion, we will be using the term "Probability" in a special sense: i.e., the estimate given by the best probabilistic model for the available data that we can find in the available time.

Our system starts out with a small set of Problem Solving Techniques (PSTs) and a simple General Conditional Probability Distribution (GCPD). When the system is given a problem, the description of this problem is the "Condition" for the GCPD. Its output is a probability distribution on PSTs - the likelihood that each of them will solve the problem by time t. The system uses these PSTs and their associated probability distributions to solve the first problem. Next, it executes its Update Algorithm: the PSTs are modified, new ones may be added, some may be deleted. The GCPD is modified. These
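The loop described above - condition the GCPD on a problem description, get a success-probability estimate for each PST, spend the time budget accordingly, then run the Update Algorithm - can be sketched in miniature. This is only an illustrative toy under loud assumptions: the real GCPD is a learned probabilistic model over problem descriptions, whereas here it is approximated by Laplace-smoothed success counts per coarse problem "kind"; the class name `Assistant`, the callable-PST interface, and the budget-splitting rule are all hypothetical choices of ours, not the paper's.

```python
class Assistant:
    """Toy sketch of the PST/GCPD loop (our simplification, not the real system)."""

    def __init__(self, psts):
        self.psts = psts          # name -> callable(problem, time_budget) -> solution or None
        self.counts = {}          # (problem_kind, pst_name) -> (successes, trials)

    def gcpd(self, kind):
        """Estimated probability that each PST solves a problem of this kind."""
        probs = {}
        for name in self.psts:
            s, n = self.counts.get((kind, name), (0, 0))
            probs[name] = (s + 1) / (n + 2)   # Laplace rule of succession
        return probs

    def solve(self, kind, problem, budget):
        """Try PSTs in order of estimated success probability, splitting the
        time budget in proportion to those estimates, then update the model."""
        probs = self.gcpd(kind)
        total = sum(probs.values())
        for name in sorted(probs, key=probs.get, reverse=True):
            result = self.psts[name](problem, budget * probs[name] / total)
            s, n = self.counts.get((kind, name), (0, 0))
            solved = result is not None
            # Update Algorithm (grossly simplified): revise the conditional counts.
            self.counts[(kind, name)] = (s + int(solved), n + 1)
            if solved:
                return result
        return None
```

As a usage example, a brute-force search PST can invert a simple function, after which the assistant's estimate for that (kind, PST) pair rises - a crude stand-in for the GCPD update after each solved problem.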
[1] Ray J. Solomonoff et al., "Complexity-based induction systems: Comparisons and convergence theorems," IEEE Trans. Inf. Theory, 1978.
[2] Nichael Lynn Cramer et al., "A Representation for the Adaptive Generation of Simple Sequential Programs," ICGA, 1985.
[3] Ray J. Solomonoff et al., "The Application of Algorithmic Probability to Problems in Artificial Intelligence," UAI, 1985.
[4] Ray J. Solomonoff, "A System for Incremental Learning Based on Algorithmic Probability," 1989.
[5] Douglas B. Lenat et al., "CYC: a large-scale investment in knowledge infrastructure," CACM, 1995.
[6] Wolfgang J. Paul et al., "Autonomous theory building systems," Ann. Oper. Res., 1995.
[7] Wolfgang Banzhaf et al., "Genetic Programming: An Introduction," 1997.
[8] Paul M. B. Vitányi et al., "An Introduction to Kolmogorov Complexity and Its Applications," Graduate Texts in Computer Science, 1993.
[9] Marcus Hutter et al., "Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet," J. Mach. Learn. Res., 2003.
[10] Jürgen Schmidhuber et al., "Optimal Ordered Problem Solver," Machine Learning, 2002.