Realistic physical camera motion for light field visualization

Light field displays depict the real world by emitting light rays that correspond to the 3D space of the scene or object to be represented. Such displays are therefore suitable for a multitude of use cases, including cinema. Cinematography, the art of motion picture photography, describes a great number of methods for working with physical cameras: a camera can be mounted on a car in a typical chase scene, attached to a spring, suspended from or mounted on a colliding object, and many more. Although these camera rigs may produce exceptional visuals, achieving similar results with light field cameras is extremely challenging. Light fields can be captured in two ways: with an actual light field camera, or with an array of cameras, which may be planar or curved. Either approach is rather demanding, as each method has its own limitations and difficulties. When using a light field camera, the captured light field should map to that of the light field display on which it is to be visualized, whereas capturing a scene with a camera array can lead to self-capture, where the cameras appear in each other's views. Moreover, the portability of camera arrays is evidently problematic due to their sheer size and weight. In addition to these challenges, light field rendering is itself far from trivial. While rendering to conventional 2D displays from camera arrays may use image-based rendering, in which many views of the scene are synthesized from pre-captured images, light fields are represented by 5D plenoptic functions that are not easy to capture with conventional camera arrays. Moreover, image-based rendering techniques often fail to produce convincing results on light field displays.
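As a rough illustration of the reduced representation discussed here, the common two-plane parameterization stores the light field as a 4D array indexed by camera position (s, t) and pixel position (u, v); "injecting" a 2D image then amounts to filling one (u, v) slice. The array shape, the nearest-neighbor lookup, and all names below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

# Hypothetical 4D light field in the two-plane (s, t, u, v) parameterization:
# (s, t) indexes the camera on the capture plane, (u, v) the pixel on the
# image plane. The dimensions are illustrative only.
S, T, U, V = 8, 8, 64, 64                                    # 8x8 camera array, 64x64 views
light_field = np.zeros((S, T, U, V, 3), dtype=np.float32)    # RGB radiance samples

def sample_ray(lf, s, t, u, v):
    """Nearest-neighbor lookup of the radiance along the ray (s, t, u, v)."""
    si = int(round(np.clip(s, 0, lf.shape[0] - 1)))
    ti = int(round(np.clip(t, 0, lf.shape[1] - 1)))
    ui = int(round(np.clip(u, 0, lf.shape[2] - 1)))
    vi = int(round(np.clip(v, 0, lf.shape[3] - 1)))
    return lf[si, ti, ui, vi]

# Injecting a 2D image captured at camera position (3, 5) fills its slice:
image = np.random.rand(U, V, 3).astype(np.float32)
light_field[3, 5] = image
```

In a real system the lookup would interpolate between neighboring views rather than snap to the nearest one; the point here is only the mapping from a 2D capture to its place in the 4D structure.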
For some use cases, these functions can be reduced to 4D for horizontal-parallax-only light fields, since our eyes are horizontally separated and horizontal motion is more frequent than vertical. In practice, creating a light field scene from a set of images requires injecting each 2D image into a 4D light field representation. In this paper, we visualize simulations of different realistic physical camera motions on a real light field display. To overcome the aforementioned problems, virtual cameras were used to simulate a set of physical camera motions used in cinematography. Physics simulation libraries provide algorithms for the dynamics of soft as well as rigid bodies, and also account for collision detection. Many tools have been devised to simulate physics, among which is the Bullet Physics library. In our work, we used the Bullet Physics library to generate realistic physical camera motions as well as physical environment simulations for light field displays. The limitations and challenges imposed by light field displays when simulating physical camera motions are discussed, along with the results and the produced outputs.
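To make the idea of physics-driven virtual camera motion concrete, the following library-free sketch integrates one of the rig types mentioned above, a camera on a damped spring mount, with semi-implicit Euler and records a per-frame position that could drive the virtual camera rendering the light field views. This is a minimal stand-in for what a full engine such as Bullet provides; the spring constants, frame rate, and function name are assumptions, not the paper's configuration:

```python
def simulate_spring_camera(frames=240, dt=1.0 / 60.0,
                           rest=0.0, k=40.0, c=2.0, mass=1.0,
                           x0=0.5, v0=0.0):
    """Vertical offset of a spring-mounted camera, one sample per frame.

    Semi-implicit Euler: update velocity from the force first, then
    position from the new velocity, which keeps the oscillation stable.
    All constants are illustrative assumptions.
    """
    x, v = x0, v0
    path = []
    for _ in range(frames):
        # Hooke's law with viscous damping: F = -k (x - rest) - c v
        f = -k * (x - rest) - c * v
        v += (f / mass) * dt
        x += v * dt
        path.append(x)
    return path

# 240 frames at 60 fps: a 4-second clip in which the bounce decays
# toward the rest position.
path = simulate_spring_camera()
```

A full engine replaces this single degree of freedom with 6-DoF rigid bodies, constraints, and collision detection, but the output consumed by the renderer is the same: a camera pose per frame.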
