An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery

This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances.
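
To make the search over candidate segmentations concrete, here is a minimal, illustrative Python sketch of an incremental, dynamic-programming segmenter of the general kind the abstract describes. It is not the paper's scoring function: familiar words are scored by their relative frequency in the lexicon built so far, and a flat per-character penalty stands in for the model's cost of introducing a novel lexical entry. The names `segment_utterance`, `lexicon`, and `novel_penalty`, and the toy utterances, are hypothetical and introduced only for this example.

```python
import math

def segment_utterance(utterance, lexicon, total_tokens, novel_penalty=20.0):
    """Viterbi-style dynamic programming over all segmentations of one
    unsegmented utterance. best[i] holds the lowest cost (negative log
    score) of any segmentation of utterance[:i]; back[i] records where
    the last word of that best segmentation starts."""
    n = len(utterance)
    best = [0.0] + [math.inf] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            word = utterance[j:i]
            if word in lexicon:
                # Familiar word: cost from its relative frequency so far.
                cost = -math.log(lexicon[word] / total_tokens)
            else:
                # Novel word: flat per-character penalty stands in for the
                # model's cost of adding a new lexical entry (an assumption,
                # not the paper's formula).
                cost = novel_penalty + len(word)
            if best[j] + cost < best[i]:
                best[i] = best[j] + cost
                back[i] = j
    # Recover the word boundaries by walking the backpointers.
    words, i = [], n
    while i > 0:
        words.append(utterance[back[i]:i])
        i = back[i]
    return list(reversed(words))

# Incremental use: segment each utterance with the lexicon built so far,
# then add the discovered words back into the lexicon.
lexicon, total = {}, 0
for utt in ["thedoggie", "seethedoggie"]:
    for w in segment_utterance(utt, lexicon, max(total, 1)):
        lexicon[w] = lexicon.get(w, 0) + 1
        total += 1
```

On this toy input the first utterance is kept as a single novel word, and the second is split into "see" plus the now-familiar "thedoggie", illustrating how earlier discoveries bias later boundary decisions; the actual model's relative-probability computation is more involved than this sketch.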
