Tagging Video Contents with Positive/Negative Interest Based on User's Facial Expression

With the enormous number of videos now available, viewers face difficulty choosing what to watch. To address this problem, we propose a system that tags video content based on the viewer's facial expression, and the resulting tags can be used for content-based video recommendation. The viewer's face, captured by a camera, is extracted by Elastic Bunch Graph Matching, and the facial expression is recognized by Support Vector Machines. Expressions are classified into Neutral, Positive, Negative, and Rejective. Recognition results are recorded as "facial expression tags" in synchronization with the video content. Experimental results achieved an average recall rate of 87.61% and an average precision rate of 88.03%.
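The abstract describes a frame-by-frame pipeline: extract facial features, classify the expression with a multi-class SVM, and record the result as a time-stamped tag alongside the video. The sketch below illustrates that flow under stated assumptions; it is not the authors' implementation. It presumes the EBGM feature vectors (e.g., Gabor-jet responses at facial node points) are already extracted per frame, and the function names, SVM hyperparameters, and tag format are illustrative.

```python
# Minimal sketch of an expression-tagging pipeline, assuming per-frame
# facial feature vectors are already available (e.g., from EBGM).
import numpy as np
from sklearn.svm import SVC

# The four expression classes named in the abstract.
LABELS = ["Neutral", "Positive", "Negative", "Rejective"]


def train_expression_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    """Train a multi-class SVM on labeled facial-feature vectors.

    `features` has shape (n_samples, n_features); `labels` holds integer
    class indices 0..3 corresponding to LABELS. Kernel and hyperparameters
    here are assumptions, not values from the paper.
    """
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(features, labels)
    return clf


def tag_video(clf: SVC, frame_features: list, timestamps: list) -> list:
    """Classify each frame's feature vector and emit a time-stamped tag."""
    tags = []
    for t, feat in zip(timestamps, frame_features):
        label_idx = int(clf.predict(np.asarray(feat).reshape(1, -1))[0])
        tags.append({"time_sec": float(t), "expression": LABELS[label_idx]})
    return tags
```

In use, the per-frame tags would be aggregated into intervals of positive or negative interest synchronized with the video timeline, which is the form a recommender could consume according to the abstract.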
