Modeling spatial-temporal patterns in facial articulation

In this paper, a new method for modeling human facial articulation is proposed. The approach has three major parts: spatial dimension reduction through principal component analysis; temporal function approximation using sample basis functions that resemble the facial articulation process; and a learning algorithm that improves recognition and compression performance. The scheme is also used to encode facial articulation parameter sequences. Although it was developed for the MPEG-4 facial animation parameter set (FAPs), the algorithm can be easily applied to other parameter representations.
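The spatial dimension reduction step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame count, the number of retained components, and the synthetic data are all assumptions; only the use of PCA on per-frame facial animation parameter vectors comes from the abstract (MPEG-4 defines 68 FAPs, which fixes the vector length here).

```python
import numpy as np

# Hypothetical data: a sequence of facial animation parameter (FAP)
# vectors, one 68-dimensional vector per frame (68 = MPEG-4 FAP count).
rng = np.random.default_rng(0)
n_frames, n_params = 200, 68
X = rng.standard_normal((n_frames, n_params))

# Spatial dimension reduction via PCA: center the frames, then take the
# top-k right singular vectors of the centered data as a spatial basis.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 8                      # number of retained components (assumed)
basis = Vt[:k]             # (k, n_params) spatial basis
coeffs = Xc @ basis.T      # (n_frames, k) low-dimensional trajectory

# Reconstruct from the truncated basis and measure the relative error.
X_hat = coeffs @ basis + mean
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(coeffs.shape, float(rel_err) < 1.0)
```

The per-frame coefficient trajectory `coeffs` is what a temporal basis-function approximation would then model, replacing each 68-dimensional frame with a k-dimensional code.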