Optimality-Theoretic learning in the Praat program
This tutorial gives a step-by-step introduction to stochastic OT grammars and to how you can use the Gradual Learning Algorithm available in the Praat program to help you rank Optimality-Theoretic constraints in ordinal and stochastic grammars. It also describes how you can draw Optimality-Theoretic tableaus and simulate Optimality-Theoretic learning with the Praat program (Boersma & Weenink 1992–2000).

1. Kinds of OT grammars

According to Prince & Smolensky (1993), an Optimality-Theoretic (OT) grammar consists of a number of ranked constraints. For every possible input (underlying form), GEN generates a (possibly very large) number of output candidates, and the ranking order of the constraints determines the winning candidate, which becomes the single optimal output. In OT, ranking is strict: if a constraint A is ranked higher than the constraints B, C, and D, a candidate that violates only constraint A will always be beaten by any candidate that respects A (and any higher-ranked constraints), even if it violates B, C, and D.

— Ordinal OT grammars. Because only the ranking order of the constraints plays a role in evaluating the output candidates, the grammar was taken to contain no absolute ranking values, i.e. there was only an ordinal relation between the constraint rankings. For such a grammar, Tesar & Smolensky (1998) devised a learning algorithm (Error-Driven Constraint Demotion, EDCD) that changes the complete ranking order with every learning step, i.e. whenever the form produced by the learner is different from the adult form.

— Stochastic OT grammars. The EDCD algorithm is extremely sensitive to errors in the learning data, it cannot deal with language variation, and it does not show realistic gradual learning curves. For these reasons, Boersma (to appear; 1997; 1998: chs. 14–15) proposed stochastic constraint grammars in which every constraint has a ranking value along a continuous ranking scale, and a small amount of noise is added to this ranking value at evaluation time. The associated error-driven learning algorithm (Gradual Learning Algorithm, GLA) effects small changes in the ranking values of the constraints with every learning step. (A schematic code sketch of stochastic evaluation and a GLA update step follows below.)

* The text of this paper is virtually identical to the OT learning tutorial and the OTGrammar manual page as available from the Help menus in the Praat program (version December 1998).
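The following sketch, which is not taken from the paper, illustrates the two ideas in code: evaluation under strict ranking with Gaussian noise added to continuous ranking values, and an error-driven GLA update step. It is written in Python rather than in Praat's scripting language, and the constraint names, the noise standard deviation of 2.0, and the plasticity of 0.1 are illustrative assumptions, not Praat's settings.

    import random

    class StochasticOTGrammar:
        """Minimal sketch of a stochastic OT grammar with a GLA update step."""

        def __init__(self, ranking_values, noise=2.0, plasticity=0.1):
            # ranking_values: dict mapping constraint name -> ranking value
            self.ranking = dict(ranking_values)
            self.noise = noise            # standard deviation of evaluation noise (assumed)
            self.plasticity = plasticity  # step size of one GLA update (assumed)

        def evaluate(self, candidates):
            """Pick the optimal candidate for one tableau.
            candidates: dict mapping candidate -> dict of constraint violations."""
            # Add Gaussian noise to every ranking value at evaluation time.
            disharmony = {c: r + random.gauss(0.0, self.noise)
                          for c, r in self.ranking.items()}
            # Order constraints from highest to lowest noisy ranking.
            order = sorted(self.ranking, key=lambda c: disharmony[c], reverse=True)

            def profile(candidate):
                # Violations listed from the highest-ranked constraint downward;
                # lexicographic comparison of these tuples implements strict ranking.
                return tuple(candidates[candidate].get(c, 0) for c in order)

            return min(candidates, key=profile)

        def learn(self, candidates, adult_form):
            """One error-driven learning step: if the learner's winner differs
            from the adult form, move the relevant ranking values."""
            learner_form = self.evaluate(candidates)
            if learner_form == adult_form:
                return
            for c in self.ranking:
                adult_violations = candidates[adult_form].get(c, 0)
                learner_violations = candidates[learner_form].get(c, 0)
                if adult_violations > learner_violations:
                    self.ranking[c] -= self.plasticity   # demote: adult form violates it more
                elif learner_violations > adult_violations:
                    self.ranking[c] += self.plasticity   # promote: learner's form violates it more

A hypothetical usage example, with a single final-devoicing tableau in which the adult language always devoices coda obstruents:

    grammar = StochasticOTGrammar({"*VoicedCoda": 100.0, "Ident(voice)": 100.0})
    tableau = {"pat": {"Ident(voice)": 1}, "pad": {"*VoicedCoda": 1}}
    for _ in range(1000):
        grammar.learn(tableau, adult_form="pat")
    print(grammar.ranking)  # *VoicedCoda should end up well above Ident(voice)

Because the update is applied only on mismatches and moves ranking values by a small step, the learner's behaviour changes gradually rather than with the wholesale re-rankings of EDCD.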
[1] P. Boersma. How we learn variation, optionality and probability, 1997.
[2] P. Boersma, et al. Empirical Tests of the Gradual Learning Algorithm, 2001, Linguistic Inquiry.
[3] B. Tesar, P. Smolensky. Learnability in Optimality Theory, 2000, Linguistic Inquiry.
[4] J. van de Weijer, et al. Optimality theory: phonology, syntax, and acquisition, 2000.
[5] P. Boersma, et al. Learning a grammar in Functional Phonology, 2000.