Real-Time Production and Delivery of 3D Media

The Prometheus project has investigated new ways of creating, distributing and displaying 3D television. 3D content is created by extending the principles of a virtual studio to include realistic 3D representations of actors. Several techniques for this have been developed:

• Texture-mapping of live video onto rough 3D actor models.
• Fully-animated 3D avatars:
  • A photo-realistic body model generated from several still images of a person taken from different viewpoints.
  • A detailed head model derived from two close-up images of the head.
• Tracking of the face and body movements of a live performer using several cameras, to derive animation data that can be applied to the avatar's face and body.
• Simulation of virtual clothing that can be applied to the animated avatars.

MPEG-4 is used to distribute the content in its original 3D form. The 3D scene may be rendered in a form suitable for display on a 'glasses-free' 3D display based on the principle of Integral Imaging. By assembling these elements into an end-to-end chain, the project has shown how a future 3D TV system could be realised. Furthermore, the tools developed will also improve the production methods available for conventional virtual studios, through sensor-free and markerless motion capture, methods for the rapid creation of photo-realistic virtual humans, and real-time clothing simulation.
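The first technique listed, texture-mapping live video onto a rough actor model, amounts to projecting each model vertex through the camera that captured the video to obtain texture coordinates. The sketch below illustrates that projection step only; the function name, pinhole-camera parameters and coordinate conventions are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of projective texture mapping: each vertex of a rough
# 3D actor model (in camera coordinates) is projected through a pinhole
# camera to find which pixel of the live video frame textures it.

def project_to_texture(vertex, focal, cx, cy, width, height):
    """Project a 3D vertex (camera coordinates, z forward) to normalised
    texture coordinates (u, v) in [0, 1) over the video frame, or None
    if the vertex is behind the camera or outside the frame."""
    x, y, z = vertex
    if z <= 0.0:
        return None  # behind the camera: no video texture available
    px = focal * x / z + cx  # pinhole projection to pixel coordinates
    py = focal * y / z + cy
    if not (0.0 <= px < width and 0.0 <= py < height):
        return None  # projects outside the captured frame
    return (px / width, py / height)

# A vertex 2 m in front of the camera, assuming a 720x576 (PAL) frame
# with an (assumed) focal length of 800 pixels and a centred principal point.
uv = project_to_texture((0.1, -0.2, 2.0),
                        focal=800.0, cx=360.0, cy=288.0,
                        width=720, height=576)
```

In a full pipeline, every visible vertex would be assigned a (u, v) pair this way each frame, so the renderer can apply the live video as a texture on the rough model.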
