A study of real-time facial expression detection for virtual space teleconferencing

A new method for real-time detection of facial expressions from time-sequential images is proposed. Unlike the current implementation of virtual space teleconferencing, which requires tape marks pasted on the face to detect expressions in real time, the proposed method needs no such marks. In the proposed method, four windows are applied to four areas of the face image: the left eye, the right eye, the mouth, and the forehead. Each window is divided into blocks of 8 by 8 pixels. The discrete cosine transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical and diagonal directions. A conversion table relates the feature vectors to real 3D movements of the face. Experiments show promising results for accurate expression detection and for a real-time hardware implementation of the proposed method.
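
The following Python sketch illustrates the per-window feature extraction described above: the window is tiled into 8 by 8 blocks, a 2D DCT is applied to each block, and the squared coefficients are accumulated into horizontal, vertical and diagonal energy sums. The specific coefficient grouping (first row as horizontal frequencies, first column as vertical, the remainder as diagonal) is an assumption for illustration; the abstract states only that energies are summed along the three directions.

```python
import numpy as np
from scipy.fftpack import dct


def block_dct_features(window, block=8):
    """Compute a 3-element feature vector (horizontal, vertical, diagonal
    DCT energy) for one facial window, as a rough sketch of the method."""
    h_energy = v_energy = d_energy = 0.0
    rows, cols = window.shape
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            b = window[r:r + block, c:c + block].astype(np.float64)
            # 2D DCT-II, applied along both axes with orthonormal scaling.
            coeffs = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            energy = coeffs ** 2
            # Assumed grouping of coefficients by spatial-frequency direction:
            # first row (excluding DC) -> horizontal, first column -> vertical,
            # remaining coefficients -> diagonal.
            h_energy += energy[0, 1:].sum()
            v_energy += energy[1:, 0].sum()
            d_energy += energy[1:, 1:].sum()
    return np.array([h_energy, v_energy, d_energy])


if __name__ == "__main__":
    # Hypothetical usage: four grayscale windows cropped around the left eye,
    # right eye, mouth and forehead, here filled with random pixels.
    windows = [np.random.randint(0, 256, (32, 32)) for _ in range(4)]
    features = [block_dct_features(w) for w in windows]
    print(features)
```

In a full system, the resulting feature vectors would then be mapped through the conversion table to 3D facial movements; that mapping is not reproduced here.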