Visual speech synthesis for speech perception experiments
Analytical investigations of speech perception in the audio-visual domain require a visual stimulus that is plausibly lifelike, controllable, and well-specified. A computer package has been developed to produce real-time animated graphics that simulate the front-facial topography and the articulatory movements of the lips and jaw during VCV speech utterances. It is highly modular and can simulate a wide range of facial features, shapes, and movements. It is currently driven by streams of time-varying positional data obtained from experimental measurements of human speakers enunciating VCV utterances. Measurements of a series of point coordinates are made from sequential single frames of a videotape recording using a microprocessor-linked data-logging device, and corrections are made for the effects of global head and body movements. This is the lowest level of control in a hierarchy whose higher levels could include algorithms for generating the articulatory trajectories by rule from phonetic transcriptions. ...
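The abstract does not specify how the global head and body movements are factored out of the digitized marker coordinates. A minimal sketch of one standard approach is given below, assuming 2D per-frame marker coordinates and a subset of reference markers (e.g., on the nose bridge or forehead) that move only with the head, not with articulation; the function names, data layout, and the use of a 2D least-squares rigid alignment (Kabsch) are illustrative assumptions, not the paper's documented method.

```python
import numpy as np

def rigid_align(ref_src, ref_dst):
    """Least-squares rotation R and translation t mapping ref_src onto
    ref_dst (2D Kabsch), so that ref_dst ~= ref_src @ R.T + t."""
    mu_s, mu_d = ref_src.mean(axis=0), ref_dst.mean(axis=0)
    H = (ref_src - mu_s).T @ (ref_dst - mu_d)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - mu_s @ R.T
    return R, t

def remove_head_motion(frames, rigid_idx):
    """Map every frame's markers into the head coordinate system of
    frame 0, leaving only articulatory (lip/jaw) motion.

    frames   : (n_frames, n_points, 2) digitized marker coordinates
    rigid_idx: indices of markers assumed rigid with the head
    """
    ref0 = frames[0, rigid_idx]
    corrected = np.empty_like(frames)
    for i, pts in enumerate(frames):
        # Estimate the head pose of frame i relative to frame 0 ...
        R, t = rigid_align(ref0, pts[rigid_idx])
        # ... then invert it and apply to all points, undoing the
        # global head/body movement for this frame.
        corrected[i] = (pts - t) @ R
    return corrected
```

The corrected coordinate streams could then drive the animated face directly, which matches the abstract's description of the measurement data serving as the lowest level in a hierarchy of control.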