Unifying logic, topology and learning in Parametric logic

Many connections have been established between learning and logic, between learning and topology, or between logic and topology. Still, these connections do not lie at the heart of the respective fields: each is largely independent of the others when attention is restricted to basic notions and main results. We show that connections can in fact be made at a fundamental level, resulting in a logic with parameters that requires topological notions for its early development and notions from learning theory for its interpretation and applicability. One of the key properties of first-order logic is that the classical notion of logical consequence is compact. We generalize the notion of logical consequence, and we generalize compactness to β-weak compactness, where β is an ordinal. The effect is to stratify the set of generalized logical consequences of a theory into levels, and levels into layers. Deduction corresponds to the lower layer of the first level above the underlying theory, learning with fewer than β mind changes to layer β of the first level, and learning in the limit to the first layer of the second level. Refinements of Borel-like hierarchies provide the topological tools needed to develop the framework.
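As a purely illustrative aside, and not part of the paper's formalism, the following Python sketch shows Gold-style identification in the limit from positive data with a mind-change counter, for the hypothetical class of finite initial segments L_n = {0, ..., n} of the natural numbers. The learner converges on every text for every L_n, yet the number of mind changes admits no constant bound over the whole class; this is the kind of distinction between mind-change-bounded learning and learning in the limit that the level/layer stratification described above is meant to refine.

```python
# Illustrative sketch (an assumption for this example, not the paper's framework):
# Gold-style learning in the limit from positive data, with a mind-change counter.
# Target class: the finite initial segments L_n = {0, 1, ..., n}.
# The learner conjectures the largest element seen so far.

from typing import Iterable, Optional


def learn_initial_segment(text: Iterable[int]) -> None:
    """Process a text (stream of positive examples) and print each conjecture."""
    hypothesis: Optional[int] = None   # current conjecture: L_n encoded by n
    mind_changes = 0

    for datum in text:
        if hypothesis is None or datum > hypothesis:
            if hypothesis is not None:
                mind_changes += 1      # a genuine revision of an earlier conjecture
            hypothesis = datum
            print(f"conjecture: L_{hypothesis}  (mind changes so far: {mind_changes})")

    print(f"final conjecture: L_{hypothesis} after {mind_changes} mind changes")


if __name__ == "__main__":
    # A finite portion of a text for L_5: every element of {0,...,5} appears eventually.
    learn_initial_segment([0, 3, 3, 1, 5, 2, 4, 5, 0])
```

On any text for L_n the learner stabilizes after finitely many revisions, so it identifies the class in the limit; but since n is unbounded over the class, no fixed finite mind-change bound works, only an ordinal bound.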
