On Learning Languages from Positive Data and a Limited Number of Short Counterexamples

We consider two variants of a model for learning languages in the limit from positive data and a limited number of short negative counterexamples (a counterexample is considered short if it is smaller than the largest element of the input seen so far). Negative counterexamples to a conjecture are examples which belong to the conjectured language but do not belong to the input language. Within this framework, we explore how and when learners that use n short (arbitrary) negative counterexamples can be simulated by, or can simulate, learners that use least short counterexamples or just 'no' answers from a teacher. We also study how a limited number of short counterexamples fares against unconstrained counterexamples. A surprising result is that just one short counterexample (if present) can sometimes be more useful than any bounded number of counterexamples of least size. Most of the results are illustrated by salient examples of languages that are learnable or not learnable within the corresponding variants of our models.
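To make the teacher's role concrete, the following is a minimal sketch (not taken from the paper) of a single teacher response in the short-counterexample setting, with languages modeled as finite sets of natural numbers purely for illustration; the names short_counterexample, target, conjecture, and max_seen are hypothetical. It shows both the 'arbitrary' and the 'least' short-counterexample variants mentioned above.

```python
# Sketch of one teacher response in the short-counterexample model.
# A negative counterexample is an element of the conjectured language that is
# not in the target language; it is "short" if it does not exceed the largest
# positive example seen so far.

from typing import Optional, Set


def short_counterexample(conjecture: Set[int],
                         target: Set[int],
                         max_seen: int,
                         least: bool = False) -> Optional[int]:
    """Return a short negative counterexample to `conjecture`, or None.

    With least=True the teacher returns the least such counterexample,
    mirroring the 'least short counterexample' variant; otherwise an
    arbitrary short counterexample is returned. None plays the role of the
    teacher's 'no counterexample' answer.
    """
    candidates = [x for x in conjecture - target if x <= max_seen]
    if not candidates:
        return None  # no short counterexample exists
    return min(candidates) if least else candidates[0]


if __name__ == "__main__":
    target = {0, 2, 4, 6}          # language to be learned (illustrative)
    conjecture = {0, 1, 2, 3, 4}   # learner's current, overgeneral guess
    max_seen = 4                   # largest positive example seen so far
    print(short_counterexample(conjecture, target, max_seen))              # some element of {1, 3}
    print(short_counterexample(conjecture, target, max_seen, least=True))  # 1
```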
