Organization and operation of a connected speech understanding system at lexical, syntactic and semantic levels

This paper describes a connected speech understanding system being implemented in Nancy, building on work in automatic speech recognition carried out since 1968. The system consists of four parts: an acoustic recognizer, which produces a string of phoneme-like segments from a spoken sentence; a syntactic parser, which controls the recognition process; a word recognizer, which operates on words predicted by the parser; and a dialog procedure, which takes semantic constraints into account in order to avoid some errors and ambiguities. Some original features of the system are pointed out: modularity (e.g. the language used is treated as a parameter), the ability to process sentences that are slightly syntactically incorrect, etc. Applications both to data management and to the oral control of a telephone center have given very promising results. Work is in progress to generalize the model: extension of the vocabulary and grammar, multi-speaker operation, etc.
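The control structure sketched in the abstract — a syntactic parser that predicts candidate words, and a word recognizer that matches only those predictions against the phoneme-like segments — can be illustrated with a minimal toy loop. This is a hypothetical sketch, not the paper's implementation; the grammar, lexicon, phoneme alphabet, and scoring function below are all invented for illustration.

```python
# Hypothetical sketch of a parser-driven recognition loop: the parser
# predicts which words may come next, the word recognizer scores only
# those predictions against the segment string, and the best match
# extends the hypothesis. All names and data are illustrative.

GRAMMAR = {                      # toy grammar: state -> {word: next state}
    "S":   {"show": "OBJ", "call": "NUM"},
    "OBJ": {"files": "END"},
    "NUM": {"paris": "END"},
    "END": {},
}

LEXICON = {                      # toy lexicon: word -> phoneme-like segments
    "show": ["S", "O"], "call": ["K", "A", "L"],
    "files": ["F", "A", "I", "L"], "paris": ["P", "A", "R", "I"],
}

def score(word, segments, pos):
    """Crude match score: fraction of the word's phonemes found in place."""
    phones = LEXICON[word]
    hits = sum(1 for i, p in enumerate(phones)
               if pos + i < len(segments) and segments[pos + i] == p)
    return hits / len(phones)

def understand(segments):
    """Decode a segment string by recognizing only parser-predicted words."""
    state, pos, sentence = "S", 0, []
    while GRAMMAR[state]:                      # parser predicts candidates
        candidates = GRAMMAR[state]
        word = max(candidates, key=lambda w: score(w, segments, pos))
        if score(word, segments, pos) == 0:    # no prediction matches: stop
            break
        sentence.append(word)
        pos += len(LEXICON[word])
        state = candidates[word]               # parser advances
    return sentence

print(understand(["K", "A", "L", "P", "A", "R", "I"]))  # → ['call', 'paris']
```

Restricting recognition to parser-predicted words is what lets the syntax level control the acoustic level, shrinking the search space at each step; the paper's system adds a semantic dialog procedure on top of this loop, which the sketch omits.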