Visual information and redundancy conveyed by internal articulator dynamics in synthetic audiovisual speech

This paper reports the results of a study investigating the visual information conveyed by the dynamics of internal articulators. The intelligibility of synthetic audiovisual speech with and without visualization of internal articulator movements was compared. Additionally, speech recognition scores were contrasted before and after a short learning lesson in which articulator trajectories were explained, once with and once without the motion of internal articulators. Results show that the motion information of internal articulator dynamics did not initially lead to significantly different recognition scores, and that only with this additional visual information did the training lesson significantly increase visual and audiovisual speech intelligibility. After the learning lesson covering all internal articulatory movements, visual recognition improved to a greater degree than audiovisual recognition; the absolute increase in visual recognition could not be integrated completely into audiovisual recognition. It could be shown that this was due to redundant information conveyed by the auditory and visual sources of information.