We consider a variant of Gold's learning paradigm in which a learner receives n different languages as input, in the form of a single text where all input languages are interleaved. Our goal is to explore situations in which a coarse classification of the input languages is possible while a more refined classification is not. More specifically, we answer the following question: under which conditions can a learner, being fed n different languages, produce m grammars covering all input languages, yet fail to produce k grammars covering the input languages for any k > m? We also consider a variant of this task in which each output grammar may cover at most r input languages. Our main results indicate that the major factor affecting classification capabilities is the difference n - m between the number n of input languages and the number m of output grammars. We also explore the relationship between classification capabilities for smaller and larger groups of input languages. For the variant of our model with an upper bound on the number of languages that a single output grammar may represent, and for classes consisting of pairwise disjoint languages, we obtain a complete picture of the relationship between the classification capabilities for different values of the parameters n (the number of input languages), m (the number of output grammars), and r (the bound on the number of languages represented by each output grammar). This picture includes a combinatorial characterization of the classification capabilities for certain types of parameters n, m, r.
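As a toy illustration of how the parameters n, m, and r interact (a hypothetical sketch, not an algorithm from the paper: the function names and the greedy grouping are our own), the following code checks the basic combinatorial constraint that n languages can be split into at most m groups of at most r languages each exactly when n ≤ m·r, and produces one such grouping when it exists.

```python
def can_classify(n, m, r):
    """n input languages, at most m output grammars, each grammar
    covering at most r input languages: a grouping exists exactly
    when m * r >= n (pigeonhole on the group sizes)."""
    return m * r >= n

def greedy_groups(languages, m, r):
    """Split the given languages into at most m groups of size at
    most r, or return None when no such grouping exists."""
    if len(languages) > m * r:
        return None
    # Fill each group up to the bound r; at most m groups are needed.
    return [languages[i:i + r] for i in range(0, len(languages), r)]
```

For example, five disjoint languages can be covered by two grammars when each grammar may represent up to three languages, but not when each grammar may represent only two.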