Parallelism Increases Iterative Learning Power

Iterative learning ($\textbf{It}$-learning) is a Gold-style learning model in which each of a learner's output conjectures may depend only upon the learner's current conjecture and the current input element. Two extensions of the $\textbf{It}$-learning model are considered, each of which involves parallelism. The first is to run, in parallel, distinct instantiations of a single learner on each input element. The second is to run, in parallel, $n$ individual learners incorporating the first extension, and to allow the $n$ learners to communicate their results. In most contexts, parallelism is only a means of improving efficiency. However, as shown herein, learners incorporating the first extension are more powerful than $\textbf{It}$-learners, and collective learners resulting from the second extension increase in learning power as $n$ increases. Attention is paid to how one would actually implement a learner incorporating each extension. Parallelism is the underlying mechanism employed.
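To make the setting concrete, the following is a minimal toy sketch (not the paper's construction) of the basic $\textbf{It}$-learning protocol and of the first extension. All names here are hypothetical; the example learner identifies languages of the form $\{0, \ldots, m\}$ from a text, and the "parallel" extension is modelled by launching a fresh instantiation of the learner at every position of the text.

```python
from typing import Callable, List, Optional

# A conjecture is an index (here: the m of {0,...,m}); None = no conjecture yet.
Conjecture = Optional[int]
Learner = Callable[[Conjecture, int], Conjecture]

def it_learner(conj: Conjecture, x: int) -> Conjecture:
    """Toy It-learner: the new conjecture depends ONLY on the current
    conjecture and the current input element (no access to past data).
    It learns {0,...,m} by conjecturing the largest element seen so far."""
    return x if conj is None else max(conj, x)

def run_iteratively(learner: Learner, text: List[int]) -> Conjecture:
    """Feed the text to a single It-learner, one element at a time."""
    conj: Conjecture = None
    for x in text:
        conj = learner(conj, x)
    return conj

def run_parallel(learner: Learner, text: List[int]) -> List[Conjecture]:
    """Sketch of the first extension: a distinct instantiation of the
    same learner is started at each input element; instantiation i
    processes the suffix text[i:].  Returns every final conjecture."""
    return [run_iteratively(learner, text[i:]) for i in range(len(text))]

text = [2, 0, 3, 1, 3]
print(run_iteratively(it_learner, text))  # -> 3
print(run_parallel(it_learner, text))     # -> [3, 3, 3, 3, 3]
```

Note that this toy class is already $\textbf{It}$-learnable by a single learner; the paper's point is that there are classes where the parallel instantiations (and, in the second extension, $n$ communicating learners) strictly add power.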
