FINE-GRAINED VIEW INTERPOLATION IN MULTI-VIEW AND LIGHT-FIELD CODING
Techniques are described for increasing the number and accuracy of views in multi-view, three-dimensional (3D), and light-field systems by leveraging low-resolution sensors to provide geometry and object-location calibration when interpolating new views.

DETAILED DESCRIPTION

Modelling background occlusion and insertion is a key problem in Virtual Reality (VR), 3D, and especially Augmented Reality (AR) video communication. As participants in a call move, the area of the background that is visible to the participants changes. A multi-camera system can select views that correspond to the relative positions of the participants and display an appropriate view. However, in practical systems the number of cameras is limited, and additional views need to be interpolated.

This interpolation must solve boundary problems, such as determining the revealed/concealed boundary between foreground objects and the background. It must also solve distortion problems, such as undoing the projection effects of each view to obtain a realistic interpolated view of a rounded object with significant depth. For a realistic experience, both the segmentation and the geometrical warping need to be accurate, yet an interpolated view may misplace objects by significant amounts.

These problems are difficult to solve because the camera positions are highly undersampled spatially, and determining the occlusion boundaries is highly uncertain because the light rays that define those boundaries are tangential to objects with depth. If the objects were flat, the light rays at an edge would graze it at the same point regardless of which camera they entered; this is not so in the real world, where each camera sees the silhouette of a rounded object at a different point on its surface. Estimating edge positions in a virtual view therefore carries a lot of uncertainty. The basic problem of view interpolation is illustrated in Figure 1 and Figure 2, below.
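To make the boundary problem concrete, the following is a minimal illustrative sketch (not the disclosed method) of disparity-based forward warping of a single image scanline to an intermediate viewpoint. Pixels from the left view are shifted by a fraction `t` of their disparity; a z-buffer keyed on disparity resolves occlusions (nearer pixels, which have larger disparity, win). Columns that no source pixel maps to are the disoccluded "holes" revealed at the foreground/background boundary, which a real interpolator must fill. The function name and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def interpolate_scanline(left, disparity, t):
    """Forward-warp a 1-D scanline toward an intermediate view.

    left:      (W,) pixel intensities of the left camera view.
    disparity: (W,) per-pixel horizontal shift to the right view
               (larger disparity = nearer to the cameras).
    t:         position of the virtual view in [0, 1];
               0 reproduces the left view, 1 approximates the right.
    Returns (out, zbuf); columns where zbuf stayed at -inf received
    no source pixel and are disoccluded holes.
    """
    w = left.shape[0]
    out = np.zeros(w)
    zbuf = np.full(w, -np.inf)  # disparity of the pixel written so far
    for x in range(w):
        xt = int(round(x - t * disparity[x]))  # target column in new view
        # Z-buffer test: only overwrite with a nearer (larger-disparity) pixel.
        if 0 <= xt < w and disparity[x] > zbuf[xt]:
            out[xt] = left[x]
            zbuf[xt] = disparity[x]
    return out, zbuf
```

Running this on a scanline with a foreground strip of disparity 2 over a static background shows the two effects discussed above: the foreground overwrites background pixels on one side (concealment), and an unfilled hole opens on the other side (revelation), exactly at the uncertain occlusion boundary.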
Davies: Fine-Grained View Interpolation in Multi-View and Light-Field Coding. Published by Technical Disclosure Commons, 2019.