Lip Tracking Using Deformable Models and Geometric Approaches

Multimodal biometrics addresses the problem of recognizing and validating a person's identity, since a single modality is rarely robust enough on its own. Voice is a simple biometric trait to acquire, and the accompanying movement of the lips is distinct for every person, so the two together can meet this challenge. Tracking lip movement in real time can therefore serve as an important biometric trait for person recognition and authentication. A biometric system must be robust and secure, especially when it is deployed in critical domains. In this paper, we use three different methods to draw contours around the lips and detect lip edges in order to establish lip movement. In the first method, dynamic lip edge patterns are drawn and simultaneously saved in a database created for each person. In the second, lip contours are created using active snake (active contour) models, while in the third, lip motion is extracted using an edge detection algorithm. In the work presented here, we segment the lip region from facial images and implement and compare these three approaches to contouring the lip region. However, no single model fits every application, owing to varying face poses and unpredictable lip movements. The target application should therefore be the deciding factor when considering lip movement as a biometric modality alongside the voice trait.
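As an illustration of the second and third approaches described above, the short Python sketch below fits an active contour (snake) to a pre-segmented lip region and also computes a Canny edge map. It is a minimal sketch using scikit-image, not the authors' implementation; the ROI file name, the elliptical snake initialisation, and the parameter values (alpha, beta, gamma, sigma) are illustrative assumptions rather than values from the paper.

    # Sketch only: snake-based lip contouring and edge detection on a lip ROI.
    import numpy as np
    from skimage import io, color, filters, feature
    from skimage.segmentation import active_contour

    # Load a pre-segmented lip region of interest (hypothetical file) in grayscale.
    roi = color.rgb2gray(io.imread("lip_roi.png"))
    smoothed = filters.gaussian(roi, sigma=2)   # suppress noise before contour fitting

    # Initialise the snake as an ellipse roughly enclosing the lips (row, col coordinates).
    rows, cols = roi.shape
    t = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([
        rows / 2 + 0.35 * rows * np.sin(t),
        cols / 2 + 0.45 * cols * np.cos(t),
    ])

    # Second approach (sketch): let the contour relax onto the lip edges.
    snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
    print("Fitted lip contour with", len(snake), "points")

    # Third approach (sketch): a binary lip-edge map via Canny edge detection.
    edges = feature.canny(smoothed, sigma=2)

In practice the contour points (or the edge map) from successive frames would be compared to characterise lip movement over time.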
