The structure from motion problem has been extensively studied in the field of computer vision. Yet, the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case, where an unknown number of objects move in the scene, has received little attention, especially in terms of its theoretical treatment. We present a new method for separating and recovering the motion and shape of multiple independently moving objects in a sequence of images. The method does not require prior knowledge of the number of objects, nor does it depend on any grouping of features into an object at the image level. For this purpose, we introduce a mathematical construct of object shapes, called the shape interaction matrix, which is invariant to both the object motions and the selection of coordinate systems. This invariant structure is computable solely from the observed trajectories of image features, without grouping them into individual objects. Once the structure is computed, it allows the features to be segmented into objects by transforming it into a canonical form, and the shape and motion of each object to be recovered.
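To make the idea concrete, below is a minimal sketch of one common formulation of the shape interaction matrix under the affine-camera factorization setting: the feature trajectories are stacked into a 2F x P matrix W, and the matrix Q = V_r V_r^T is formed from the top right singular vectors of W, so that entries coupling features of independently moving objects are (approximately) zero. The function name, the rank-estimation threshold, and the toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shape_interaction_matrix(W, rank=None, tol=1e-6):
    """Q = V_r V_r^T from the SVD of the trajectory matrix W (2F x P).

    Under the factorization model, Q[i, j] is (near) zero when features
    i and j belong to independently moving objects.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    if rank is None:
        # Crude rank estimate from the singular-value spectrum (illustrative).
        rank = int(np.sum(s > tol * s[0]))
    Vr = Vt[:rank].T            # P x r right singular vectors
    return Vr @ Vr.T            # P x P shape interaction matrix

# Toy usage: two planar objects with independent random motions per frame.
rng = np.random.default_rng(0)
F, P1, P2 = 8, 5, 4
S1 = np.vstack([rng.normal(size=(2, P1)), np.ones((1, P1))])  # [points; 1]
S2 = np.vstack([rng.normal(size=(2, P2)), np.ones((1, P2))])

def random_motion(rng):
    # 2 x 3 affine motion [R | t] with a random rotation and translation.
    a = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return np.hstack([R, rng.normal(size=(2, 1))])

W = np.vstack([
    np.hstack([random_motion(rng) @ S1, random_motion(rng) @ S2])
    for _ in range(F)
])
Q = shape_interaction_matrix(W)
# |Q| is approximately block diagonal under the feature ordering used here;
# segmenting features then amounts to finding the permutation of rows and
# columns that exposes those blocks (the "canonical form" mentioned above).
```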