Depth segmentation and occluded scene reconstruction using ego-motion

This paper introduces a signal processing strategy for depth segmentation and scene reconstruction that treats occlusion as a natural component. The work aims to exploit connectivity in the temporal domain as far as possible, under the conditions that the scene is static and the camera motion is known. An object behind the foreground is reconstructed using the fact that different parts of the object have been seen in different images of the sequence. One of the main ideas in the paper is to use a spatio-temporal certainty volume c(x) of the same dimension as the input spatio-temporal volume s(x), and to treat c(x) as a 'blackboard' for rejecting already segmented image structures. The segmentation starts by searching for image structures in the foreground, eliminating their occluding influence, and then proceeding to structures further back. Normalized convolution, a weighted least-squares technique for filtering data with varying spatial reliability, is used for all filtering. This yields high spatial resolution near object borders, and only neighboring structures with similar depth support one another.
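As a concrete illustration of the filtering idea, the sketch below shows zeroth-order normalized convolution in one dimension: the signal is weighted by its certainty, filtered with an applicability function, and divided by the filtered certainty, so that samples with zero certainty (e.g. occluded or already rejected data) do not contribute to the result. This is only a minimal sketch under stated assumptions, not the paper's spatio-temporal pipeline; the NumPy/SciPy formulation, the function name normalized_convolution, and the Gaussian applicability are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve

def normalized_convolution(signal, certainty, applicability):
    """Zeroth-order normalized convolution: a weighted least-squares
    reconstruction of a signal whose samples have varying reliability.
    Samples with certainty 0 (occluded / already segmented) are ignored."""
    num = convolve(signal * certainty, applicability, mode="constant")
    den = convolve(certainty, applicability, mode="constant")
    eps = 1e-12  # guard against division by zero where no certain samples exist
    return num / np.maximum(den, eps)

# Toy usage: reconstruct a 1-D signal in which a span of samples is occluded.
x = np.linspace(0, 2 * np.pi, 64)
s = np.sin(x)
c = np.ones_like(s)
c[20:30] = 0.0                                     # occluded samples: zero certainty
a = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)   # Gaussian applicability window
s_hat = normalized_convolution(s, c, a)            # occluded span filled from neighbors
```

In the paper's setting the same principle would be applied over a spatio-temporal volume, with the certainty volume c(x) updated as foreground structures are segmented and rejected.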
