Automatic LIP-SYNC: direct translation of speech sound to mouth animation
The goal of automatic lip-sync is to translate speech sounds into mouth shapes. Although this seems related to speech recognition, mapping directly from sound to shape avoids many of the language problems associated with speech recognition and provides a unique domain for error correction.
The method of automatic lip-sync developed here is to compute various moment functions of the speech spectrum and correlate them with mouth shape for sounds whose shapes are known. These correlations are then used to predict mouth shapes for new sounds.
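As a rough illustration of this idea, the sketch below computes spectral moments (the centroid and higher central moments of the magnitude spectrum) for windowed audio frames and fits an affine least-squares map from those moments to a known mouth-shape parameter. This is a minimal sketch under assumed details: the specific moment functions, the single scalar "mouth opening" parameter, and the linear fit are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def spectral_moments(frame, sr, n_moments=3):
    """Spectral centroid plus higher central moments of one audio frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = spec / spec.sum()               # normalize to a distribution over frequency
    centroid = np.sum(freqs * p)        # first moment
    moments = [centroid]
    for k in range(2, n_moments + 1):   # central moments of order 2..n
        moments.append(np.sum(((freqs - centroid) ** k) * p))
    return np.array(moments)

def fit_map(frames, shapes, sr):
    """Least-squares affine map from spectral moments to known mouth shapes."""
    X = np.array([spectral_moments(f, sr) for f in frames])
    X = np.column_stack([X, np.ones(len(X))])   # affine (bias) term
    coef, *_ = np.linalg.lstsq(X, np.asarray(shapes), rcond=None)
    return coef

def predict_shape(frame, sr, coef):
    """Predict a mouth-shape parameter for a new sound frame."""
    m = spectral_moments(frame, sr)
    return np.append(m, 1.0) @ coef
```

In use, `fit_map` would be trained on frames annotated with mouth shapes (e.g. from reference footage), and `predict_shape` would then drive the animation parameter frame by frame on new speech.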
Among other applications, automatic lip-sync animation may be used to animate cartoon characters realistically and as an aid to the hearing impaired.