Learning in the presence of additional information and inaccurate information

Inductive inference machines (IIMs) model both language learning and scientific learning. In the classical model, a machine attempts to construct an explanation of a phenomenon as it receives data about that phenomenon (Gol67; CS83; OSW86b). The machine is said to be successful if it ultimately succeeds in explaining the phenomenon. This is a naive model of science. For one thing, a scientist has more information available than just the results of experiments; for example, a scientist may have some knowledge about the complexity of the phenomenon they are trying to learn. For another, the result of the scientist's investigation need not be the final theory. Finally, a scientist may already have some approximate explanation of the phenomenon. The study of such additional information constitutes the first part of this thesis.

In the real world, input is rarely free of error; it usually suffers from noise and missing data. The study of different notions of such inaccuracies in the input is the focus of the second part of this thesis.
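The classical model described above can be illustrated with a small sketch (not taken from the thesis). The setup is hypothetical: the phenomenon is assumed to be a function f(x) = x mod k for some unknown modulus k, and the machine learns by enumeration, conjecturing the first hypothesis consistent with all data seen so far. Success in the limit means the sequence of conjectures eventually stabilizes on the correct hypothesis.

```python
# Toy sketch of an inductive inference machine (IIM) that learns by
# enumeration.  Hypothetical class of phenomena: f(x) = x % k for some
# unknown modulus k; the machine receives pairs (x, f(x)) one at a time.

def make_iim(max_k=100):
    """Return an IIM: a function mapping the data seen so far to a conjecture."""
    def conjecture(data):
        # Identification by enumeration: output the first hypothesis
        # consistent with every data point observed so far.
        for k in range(1, max_k + 1):
            if all(y == x % k for x, y in data):
                return k
        return None  # no hypothesis in the class fits the data
    return conjecture

def run(target_k, stream, iim):
    """Feed a data stream about the phenomenon to the IIM, recording conjectures."""
    data, guesses = [], []
    for x in stream:
        data.append((x, x % target_k))
        guesses.append(iim(data))
    return guesses

guesses = run(7, range(20), make_iim())
```

On this run the early conjectures are wrong (smaller moduli fit the sparse data), but once enough data arrives the conjectures converge to 7 and never change again, which is the sense of "ultimate success" in the limit. The names `make_iim` and `run` and the modulus class are illustrative assumptions, not notation from the thesis.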