RE@CT: A new production pipeline for interactive 3D content

The RE@CT project set out to revolutionise the production of realistic 3D characters for game-like applications and interactive video productions, and to significantly reduce costs, by developing an automated process that extracts and represents animated characters from actor performances captured in a multi-camera studio. The key innovation is a set of methods for the analysis and representation of 3D video that allow its reuse for real-time interactive animation, enabling efficient authoring of interactive characters with video-quality appearance and motion.
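
The reuse step outlined above is commonly organised around a motion-graph-style representation, in which captured performance segments form nodes and the frames where segments can be blended form edges. The following is a minimal illustrative sketch of such a structure only; the names (MotionClip, MotionGraph) are hypothetical and do not correspond to the RE@CT project's actual software.

```python
# Minimal sketch of a motion graph over captured 3D video segments.
# Hypothetical structure for illustration; not the RE@CT implementation.
from dataclasses import dataclass, field


@dataclass
class MotionClip:
    """A short segment of captured 3D video (mesh + texture per frame)."""
    name: str
    frame_count: int


@dataclass
class MotionGraph:
    """Directed graph whose edges mark frames where clips can be blended."""
    clips: dict = field(default_factory=dict)
    transitions: dict = field(default_factory=dict)  # clip name -> [(frame, next clip)]

    def add_clip(self, clip: MotionClip) -> None:
        self.clips[clip.name] = clip
        self.transitions.setdefault(clip.name, [])

    def add_transition(self, src: str, frame: int, dst: str) -> None:
        # A transition is only valid at a frame that exists in the source clip.
        assert frame < self.clips[src].frame_count
        self.transitions[src].append((frame, dst))

    def next_clips(self, current: str, frame: int) -> list:
        """Clips reachable from the current playback position."""
        return [dst for (f, dst) in self.transitions[current] if f == frame]


# Usage: build a tiny graph of captured performance segments and query it
# at runtime to decide which segment can be played next.
graph = MotionGraph()
graph.add_clip(MotionClip("walk", 60))
graph.add_clip(MotionClip("wave", 45))
graph.add_transition("walk", 59, "wave")
print(graph.next_clips("walk", 59))  # ['wave']
```

At runtime, an interactive controller would traverse such a graph, choosing transitions in response to user input so that the character is animated by replaying and blending captured 3D video rather than by hand-authored keyframes.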
