Estimating speech from lip dynamics

The goal of this project is to develop a limited lip-reading algorithm for a subset of the English language. We consider a scenario in which no audio information is available. The raw video is processed and the position of the lips in each frame is extracted. We then prepare the lip data for classification and assign the lip shapes to visemes and their corresponding phonemes. Hidden Markov Models are used to predict the words the speaker is saying from the sequences of classified phonemes and visemes. The GRID audiovisual sentence corpus [10][11] is used for our study.
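To make the decoding step concrete, the sketch below shows Viterbi decoding over a discrete hidden Markov model, the standard way to recover the most likely hidden-state sequence from a stream of classified visemes. This is an illustrative sketch, not the implementation used in the study: the two-state inventory, the transition and emission probabilities, and the three-viseme observation alphabet are all hypothetical placeholders.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Return the most probable hidden-state path for an observation sequence.

    obs : list[int]   -- indices of observed visemes
    pi  : (N,) array  -- initial state probabilities
    A   : (N, N) array -- transition probabilities, A[i, j] = P(state j | state i)
    B   : (N, M) array -- emission probabilities, B[i, k] = P(viseme k | state i)
    """
    N, T = len(pi), len(obs)
    # Work in log space to avoid numerical underflow on long sequences.
    log_pi, log_A, log_B = (np.log(p + 1e-12) for p in (pi, A, B))
    delta = np.zeros((T, N))           # best log-probability ending in each state
    psi = np.zeros((T, N), dtype=int)  # back-pointers to the best predecessor
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # (from-state, to-state) scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy example: 2 hidden phoneme states, 3 viseme classes (all numbers made up).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))  # -> [0, 0, 1, 1]
```

In a word-recognition setting one would typically train one such model per vocabulary word (e.g., with the Baum-Welch algorithm, cf. [3][5]) and pick the word whose model assigns the highest likelihood to the observed viseme sequence.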

[1] Yangsheng Xu, et al. Hidden Markov Model for Gesture Recognition, 1994.

[2] Wu Chou, et al. Acoustic driven viseme identification for face animation, 1997, Proceedings of the First Signal Processing Society Workshop on Multimedia Signal Processing.

[3] Jenq-Neng Hwang, et al. Baum-Welch hidden Markov model inversion for reliable audio-to-visual conversion, 1999, IEEE Third Workshop on Multimedia Signal Processing.

[4] Naomi Harte, et al. Phoneme-to-viseme Mapping for Visual Speech Recognition, 2012, ICPRAM.

[5] Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition, 1989, Proc. IEEE.

[6] Ahmad Basheer Hassanat, et al. Visual Words for Automatic Lip-Reading, 2014, arXiv.

[7] Frédo Durand, et al. The visual microphone, 2014, ACM Trans. Graph.

[8] Thad Starner, et al. Visual Recognition of American Sign Language Using Hidden Markov Models, 1995.