Fast Matching of a Dynamic Lip Model to Color Video Sequences under Regular Illumination Conditions

Automatic speechreading relies on robust lip image analysis. In this work, no special illumination or lip make-up is used. In contrast to other approaches, the analysis is based on true-color video images. The system allows for real-time tracking and storage of the lip region and for robust off-line lip model matching. The proposed model is based on cubic outline curves. A neural classifier detects the visibility of teeth edges and other attributes. At the current stage of the approach, the edge between the closed lips is modeled automatically where applicable, based on the neural network's decision. Additional interior model parts for the teeth and the lip opening may be added in the future. To allow fast model adaptation, image processing is performed only where necessary, and an image-processing cache stores intermediate results for fast repeated access. The energy function minimized during adaptation accounts for internal model forces (springs) as well as processed image data, external forces, and dynamic constraints. The rotationally invariant model allows easy extraction of spatial and dynamic parameters. In addition, principal component analysis or neural dimensionality reduction techniques may be applied to obtain significant features for recognition.
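
To make the energy formulation concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes a closed cubic outline (here approximated by a Catmull-Rom spline through a few control points), a lazily evaluated per-pixel edge response standing in for the true-color image processing (computed only where the adapting contour visits and cached for reuse), internal springs that hold neighboring control points at their rest spacing, and a dynamic constraint tying the solution to the previous frame. All names, weights, and the synthetic edge response are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical image-processing cache: the edge response is computed per
    # pixel only when the adapting contour actually visits it, then reused.
    _edge_cache = {}

    def edge_strength(x, y):
        key = (int(round(x)), int(round(y)))
        if key not in _edge_cache:
            # Stand-in for a true-color edge operator; two horizontal bands
            # of high response mimic the upper and lower lip edges.
            _edge_cache[key] = (np.exp(-((key[1] - 24) ** 2) / 18.0)
                                + np.exp(-((key[1] - 40) ** 2) / 18.0))
        return _edge_cache[key]

    def contour(ctrl, samples=6):
        """Closed cubic (Catmull-Rom) outline through the control points."""
        k = len(ctrl)
        pts = []
        for i in range(k):
            p0, p1, p2, p3 = (ctrl[(i + j - 1) % k] for j in range(4))
            for t in np.linspace(0.0, 1.0, samples, endpoint=False):
                t2, t3 = t * t, t ** 3
                pts.append(0.5 * (2 * p1 + (p2 - p0) * t
                                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                                  + (3 * p1 - p0 - 3 * p2 + p3) * t3))
        return np.array(pts)

    def energy(flat, prev_flat, rest_len, alpha=0.5, beta=0.05):
        ctrl = flat.reshape(-1, 2)
        pts = contour(ctrl)
        # Image term: contour points lying on strong edges lower the energy.
        e_img = -sum(edge_strength(x, y) for x, y in pts)
        # Internal springs: neighboring control points keep their rest spacing.
        d = np.linalg.norm(np.roll(ctrl, -1, axis=0) - ctrl, axis=1)
        e_spring = np.sum((d - rest_len) ** 2)
        # Dynamic constraint: stay near the previous frame's solution.
        e_dyn = np.sum((flat - prev_flat) ** 2)
        return e_img + alpha * e_spring + beta * e_dyn

    # Previous-frame model: a rough ellipse of 8 control points around the mouth.
    theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    prev = np.stack([48 + 30 * np.cos(theta), 32 + 10 * np.sin(theta)], axis=1)
    rest = np.linalg.norm(np.roll(prev, -1, axis=0) - prev, axis=1)

    res = minimize(energy, prev.ravel(), args=(prev.ravel(), rest),
                   method="Powell")
    print("adapted control points:\n", res.x.reshape(-1, 2).round(1))

A derivative-free minimizer is used in the sketch because nearest-pixel sampling makes the image term non-smooth; the choice of optimizer, like the weights alpha and beta, is an assumption rather than part of the described system.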