Real-time 3D character integration into a real-world environment using reconstructed z depth

We present a novel and efficient pipeline for interactive entertainment that integrates a real-time 3D character into real-world environments. Instead of conventional 3D environment creation, our method reconstructs the z depth of real-world video footage, allowing a 3D character to navigate it in real time. Three processes enable this: 3D match moving, photogrammetric 3D model generation of the environment, and HDRI lighting. 3D match-moving software determines the position of each real-world camera shot relative to 3D model space. The environment geometry is rendered as an invisible shadow-catcher, onto which the video footage itself is projected as a texture. 3D characters are then added, lit with HDR images of the location, and made interactive in a game engine.
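The core of projecting the footage back onto the reconstructed geometry is standard camera projection: each world-space vertex is transformed by the match-moved camera's extrinsics and intrinsics, and the resulting pixel coordinates become texture UVs into the video frame. The sketch below illustrates this math only; the camera matrix values, function name, and frame size are hypothetical placeholders, not values from the pipeline described above.

```python
import numpy as np

def projected_uvs(vertices, K, R, t, width, height):
    """Project world-space vertices through a match-moved camera to get
    normalized UVs for camera-projection texturing of the footage.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    verts = np.asarray(vertices, dtype=float)      # (N, 3) world positions
    cam = R @ verts.T + t.reshape(3, 1)            # world -> camera space
    pix = K @ cam                                  # camera -> homogeneous pixels
    pix = pix[:2] / pix[2]                         # perspective divide
    u = pix[0] / width                             # normalize to [0, 1]
    v = pix[1] / height
    return np.stack([u, v], axis=1)

# Hypothetical match-moved camera: identity rotation, 2 units behind origin,
# focal length 800 px, principal point at the center of a 1280x720 frame.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

uv = projected_uvs([[0.0, 0.0, 0.0]], K, R, t, 1280, 720)
# A point on the optical axis lands at the image center: UV (0.5, 0.5)
```

In a game engine the same transform is typically evaluated per-fragment in a shader, with the shadow-catcher material sampling the current video frame at these projected coordinates.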