Learning About Systems That Contain State Variables

It is difficult to learn about systems that contain state variables when those variables are not directly observable. This paper formalizes this learning problem and presents a method, called the iterative extension method, for solving it. In the iterative extension method, the learner gradually constructs a partial theory of the state-containing system. At each stage, the learner applies this partial theory to interpret the I/O behavior of the system and to obtain additional constraints on the structure and values of its state variables. These constraints can be applied to extend the partial theory by hypothesizing additional internal state variables. The improved theory can then be applied to interpret more complex I/O behavior. This process continues until a theory of the entire system is obtained. Several sufficient conditions for the success of this method are presented, including (a) the observability and decomposability of the state information in the system, (b) the learnability of individual state transitions in the system, (c) the ability of the learner to synthesize straight-line programs and conjunctive predicates from examples, and (d) the ability of the learner to perform theory-driven data interpretation. The method is being implemented and applied to the problem of learning UNIX file system commands by observing a tutorial interaction with UNIX.
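The loop described above can be summarized as: interpret observed I/O traces with the current partial theory, collect the traces the theory cannot explain, hypothesize additional hidden state variables and transitions consistent with them, and repeat. The sketch below is a minimal, hypothetical illustration of that control structure, not the paper's implementation: the names (Theory, hypothesize_extension, iterative_extension) and the toy target system (a hidden toggle switch read out by a "read" command) are assumptions introduced only to make the loop concrete and runnable.

```python
# Minimal sketch of the iterative extension loop (illustrative only).
# The target system is a toy toggle switch with one hidden boolean state variable;
# a real learner would synthesize transition rules from constraints on the traces.

from dataclasses import dataclass, field


@dataclass
class Theory:
    """A partial theory: hypothesized state variables plus transition rules.

    Rules map (command, state) -> (output, next_state), where state is a tuple
    of the current values of the hypothesized state variables.
    """
    state_vars: tuple = ()                     # names of hypothesized state variables
    rules: dict = field(default_factory=dict)

    def explains(self, trace):
        """Replay a trace (a list of (command, output) pairs) under this theory."""
        state = tuple(False for _ in self.state_vars)   # assume all variables start False
        for command, observed in trace:
            rule = self.rules.get((command, state))
            if rule is None or rule[0] != observed:
                return False
            state = rule[1]
        return True


def hypothesize_extension(theory, failed_traces):
    """Toy synthesis step: add one boolean variable toggled by 'inc' and read by 'read'.
    A real learner would derive these rules from the constraints implied by failed_traces."""
    if theory.state_vars:
        return None                            # this toy learner adds at most one variable
    return Theory(
        state_vars=("toggle",),
        rules={
            ("inc", (False,)): ("ok", (True,)),
            ("inc", (True,)): ("ok", (False,)),
            ("read", (False,)): ("off", (False,)),
            ("read", (True,)): ("on", (True,)),
        },
    )


def iterative_extension(traces, theory, max_rounds=5):
    """Interpret traces with the current partial theory and extend it with
    additional hidden state variables until every trace is explained."""
    for _ in range(max_rounds):
        unexplained = [t for t in traces if not theory.explains(t)]
        if not unexplained:
            return theory                      # the theory covers all observed behavior
        extension = hypothesize_extension(theory, unexplained)
        if extension is None:
            break                              # no consistent extension was found
        theory = extension
    return theory                              # best partial theory obtained so far


if __name__ == "__main__":
    traces = [
        [("read", "off")],
        [("inc", "ok"), ("read", "on")],
        [("inc", "ok"), ("inc", "ok"), ("read", "off")],
    ]
    learned = iterative_extension(traces, Theory())
    print("state variables:", learned.state_vars)
    print("explains all traces:", all(learned.explains(t) for t in traces))
```

In this sketch the "theory-driven data interpretation" step is the replay in Theory.explains, and the synthesis of transitions is collapsed into a single hard-coded hypothesis; in the method described in the paper, that step would instead synthesize straight-line programs and conjunctive predicates from the constrained examples.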
