Speaker independence in automated lip-sync for audio-video communication

[1] Satnam Singh Dlay, et al. Automated lip synchronisation for human-computer interaction and special effect animation, 1997, Proceedings of IEEE International Conference on Multimedia Computing and Systems.

[2] Barrett Emil Koster. Automatic LIP-SYNC: direct translation of speech sound to mouth animation, 1995.

[3] Alex Pentland, et al. Recovering 3D lip structure from 2D observations using a model trained from video, 1997, AVSP.

[4] David F. McAllister, et al. Lip Synchronization as an Aid to the Hearing Impaired, 1997.

[5] Satoshi Nakamura, et al. Speech to lip movement synthesis by HMM, 1997, AVSP.

[6] David F. McAllister, et al. Lip synchronization of speech, 1997, AVSP.

[7] D. Bitzer, et al. Automated lip-sync: direct translation of speech-sound to mouth-shape, 1994, Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers.

[8] Leonard T. Bruton, et al. Lip synchronization in 3-D model based coding for video-conferencing, 1995, Proceedings of ISCAS'95 - International Symposium on Circuits and Systems.

[9] Keith Waters, et al. Computer facial animation, 1996.

[10] F. Lavagetto, et al. Time-delay neural networks for estimating lip movements from speech analysis: a useful tool in audio-video synchronization, 1997, IEEE Trans. Circuits Syst. Video Technol.

[11] Michael Vogt. Interpreted multi-state lip models for audio-visual speech recognition, 1997, AVSP.

[12] John Lewis, et al. Automated lip-sync: Background and techniques, 1991, Comput. Animat. Virtual Worlds.

[13] Eric Vatikiotis-Bateson, et al. An hybrid approach to orientation-free liptracking, 1997, AVSP.

[14] Hiroshi Harashima, et al. Facial Animation Synthesis for Human-Machine Communication System, 1993, HCI.

[15] David F. McAllister, et al. Lip synchronization for animation, 1997, SIGGRAPH '97.