Syntax and semantics in a distributed speech understanding system
The Hearsay II speech understanding system being developed at Carnegie-Mellon University has an independent knowledge source module for each type of speech knowledge. Modules communicate by reading, writing, and modifying hypotheses about various constituents of the spoken utterance in a global data structure. The syntax and semantics module uses rules (productions) of four types: (1) recognition rules for generating a phrase hypothesis when its needed constituents have already been hypothesized; (2) prediction rules for inferring the likely presence of a word or phrase from previously recognized portions of the utterance; (3) respelling rules for hypothesizing the constituents of a predicted phrase; and (4) postdiction rules for supporting an existing hypothesis on the basis of additional confirming evidence. The rules are automatically generated from a declarative (i.e., non-procedural) description of the grammar and semantics, and are embedded in a parallel recognition network for efficient retrieval of applicable rules. The current grammar uses a 450-word vocabulary and accepts simple English queries for an information retrieval system.
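The blackboard-style interaction described above can be illustrated with a minimal sketch. This is not the original Hearsay II implementation; the `Hypothesis` and `Blackboard` classes, the tiny two-word grammar fragment ("show documents"), and the rule functions are all hypothetical, chosen only to show how recognition, prediction, and postdiction rules read and write a shared hypothesis store (the respelling rule type is omitted for brevity):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    kind: str        # e.g. "word" or "phrase"
    label: str
    start: int       # position of first constituent
    end: int         # position past last constituent
    support: float = 1.0

class Blackboard:
    """Global data structure that the knowledge-source modules
    read, write, and modify (hypothetical stand-in)."""
    def __init__(self):
        self.hypotheses = []

    def add(self, hyp):
        self.hypotheses.append(hyp)
        return hyp

    def find(self, kind=None, label=None):
        return [h for h in self.hypotheses
                if (kind is None or h.kind == kind)
                and (label is None or h.label == label)]

def prediction_rule(bb):
    """Prediction: infer a likely neighboring word from a
    previously recognized portion of the utterance."""
    for h in bb.find(kind="word", label="show"):
        return bb.add(Hypothesis("word", "documents",
                                 h.end, h.end + 1, support=0.5))
    return None

def postdiction_rule(bb):
    """Postdiction: raise support for an existing hypothesis
    when additional confirming evidence is present."""
    for h in bb.find(kind="word", label="documents"):
        if h.support < 1.0 and bb.find(kind="word", label="show"):
            h.support = 1.0
            return h
    return None

def recognition_rule(bb):
    """Recognition: generate a phrase hypothesis once all of its
    needed constituents have already been hypothesized."""
    words = {h.label: h for h in bb.find(kind="word")}
    if {"show", "documents"} <= words.keys():
        w1, w2 = words["show"], words["documents"]
        return bb.add(Hypothesis("phrase", "QUERY", w1.start, w2.end))
    return None

# Usage: an acoustic module hypothesizes the word "show";
# the syntax/semantics rules then fill in the rest.
bb = Blackboard()
bb.add(Hypothesis("word", "show", 0, 1))
prediction_rule(bb)            # hypothesizes "documents" with low support
postdiction_rule(bb)           # confirming evidence raises its support
phrase = recognition_rule(bb)  # both constituents present: phrase "QUERY"
```

In the real system the applicable rules are retrieved through a compiled parallel recognition network rather than tried one by one as here; this sketch only conveys the data flow through the shared hypothesis structure.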