Language Learning with Some Negative Information

Gold-style language learning is a formal theory of learning from examples by algorithmic devices called learning machines. Originally motivated by child language learning, it concerns the algorithmic synthesis, in the limit, of grammars for formal languages from information about those languages. In traditional Gold-style language learning, learning machines receive no negative information, i.e., no information about the complements of the input languages. We investigate two approaches to providing small amounts of negative information and demonstrate that each yields a strong increase in learning power. Finally, we show that small packets of negative information also increase the speed of learning. This result agrees with a psycholinguistic hypothesis of McNeill correlating the availability of parental expansions with the speed of child language development.
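As background, here is a minimal sketch of Gold's classical criterion of identification in the limit from positive data (often written TxtEx); the criteria with negative information studied here refine this definition, so the sketch is illustrative rather than this paper's exact formalization. A text $T$ for a language $L$ is an infinite sequence enumerating exactly the elements of $L$, $T[n]$ is its initial segment of length $n$, and $W_e$ is the language generated by grammar (program) $e$:

\[
  M \ \text{TxtEx-identifies}\ L \iff (\forall\,\text{texts } T \text{ for } L)\,(\exists e)\,\bigl[\, W_e = L \ \wedge\ (\forall^{\infty} n)\; M(T[n]) = e \,\bigr],
\]

where $(\forall^{\infty} n)$ abbreviates "for all but finitely many $n$": on any text for $L$, the machine's successive conjectures converge to a single correct grammar. Roughly, supplying negative information amounts to also allowing (some) elements of the complement $\overline{L}$ in the input sequence, suitably marked as non-members.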
