Depth Range Control in Visually Equivalent Light Field 3D

The appearance of 3D video content depends on the shooting conditions, in particular the camera positions. Controlling the depth range in post-processing is therefore difficult, yet essential, because video must be generated as if shot from arbitrary camera positions. If full light field information is available, video from any viewpoint can be generated exactly, which makes such post-processing possible. However, a light field contains a huge amount of data and is difficult to capture. To reduce the data volume, we previously proposed the visually equivalent light field (VELF), which exploits the characteristics of human vision. Although a number of cameras are required, a VELF can be captured with a camera array. Because camera interpolation is performed by linear blending, the computation is simple enough that the ray distribution field of a VELF can be constructed by optical interpolation in the VELF3D display, which achieves high image quality owing to its high pixel-usage efficiency. In this paper, we summarize the relationship between the characteristics of human vision, VELF, and the VELF3D display. We then propose a method for controlling the depth range of the image observed on the VELF3D display, and discuss the effectiveness and limitations of presenting the processed image on it. Our method can also be applied to other 3D displays, and since the computation is just weighted averaging, it is suitable for real-time applications.
key words: 3D display, light field, linear blending, depth range
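As a rough illustration of the linear blending underlying VELF view interpolation, the sketch below synthesizes an intermediate viewpoint as a weighted average of two adjacent camera images. This is a minimal Python/NumPy sketch under our own assumptions: the function name, the normalized viewpoint parameter t, and the two-camera setup are illustrative, not the paper's implementation.

    import numpy as np

    def interpolate_view(img_left, img_right, t):
        # img_left, img_right: float images of identical shape (H, W, 3)
        # captured by two adjacent cameras in the array.
        # t: normalized viewpoint position, 0.0 = left camera, 1.0 = right.
        # The interpolated view is a per-pixel weighted average of the two
        # images; this simplicity is what permits an optical (display-side)
        # realization of the blending.
        return (1.0 - t) * img_left + t * img_right

    # Example: a viewpoint 30% of the way from the left camera to the right.
    left = np.zeros((480, 640, 3), dtype=np.float32)
    right = np.ones((480, 640, 3), dtype=np.float32)
    view = interpolate_view(left, right, 0.3)

Because each output pixel is just a two-term weighted sum, operations of this kind run comfortably in real time, which is consistent with the abstract's claim that the proposed depth-range control, being weighted averaging, suits real-time applications.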
