Segmentation of people in motion

A method is described for segmenting monocular images of people in motion from a cinematic sequence of frames. The method is based on image intensities, motion, and an object model, i.e., a model of the image of a person in motion. Although the parts of a person may move in different directions at any instant, the time-averaged motion of all parts converges to a common global value over a few seconds. People in an image may be occluded by other people, and their boundaries are usually not easy to detect. These boundaries can be detected from motion information when the people move in different directions, even when there is almost no apparent difference in intensity or color between them. The image of a person in a scene usually can be divided into several parts, each with distinct intensities or colors. Because these parts move coherently, they can be merged into a single group by an iterative merging algorithm based on the object model and the motion information. This merging is analogous to perceptual grouping in the human visual perception of motion. Experiments on a sequence of complex real scenes produced results that support the authors' approach to the segmentation of people in motion.
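The abstract's central idea, that parts of a person move differently frame to frame but share a common time-averaged motion, can be illustrated with a minimal sketch. The paper does not give the algorithm's details here, so everything below is an assumption: region labels, the adjacency list, the tolerance `tol`, and the union-find bookkeeping are all hypothetical, and real per-region motion would come from an optical-flow estimate rather than hand-supplied samples.

```python
import math

def avg_motion(samples):
    """Time-average a region's per-frame motion vectors (dx, dy)."""
    n = len(samples)
    return (sum(dx for dx, _ in samples) / n,
            sum(dy for _, dy in samples) / n)

def merge_coherent_regions(regions, adjacency, tol=0.5):
    """Iteratively merge adjacent regions whose time-averaged motions
    agree within `tol` (pixels/frame). `regions` maps a label to a list
    of per-frame (dx, dy) samples; `adjacency` lists neighboring pairs.
    Hypothetical sketch, not the authors' actual algorithm.
    """
    # Union-find over region labels.
    parent = {r: r for r in regions}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path compression
            r = parent[r]
        return r

    changed = True
    while changed:          # iterate until no more merges occur
        changed = False
        for a, b in adjacency:
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            ma, mb = avg_motion(regions[a]), avg_motion(regions[b])
            # Merge if the time-averaged motions are nearly equal.
            if math.hypot(ma[0] - mb[0], ma[1] - mb[1]) < tol:
                parent[rb] = ra
                changed = True

    groups = {}
    for r in regions:
        groups.setdefault(find(r), []).append(r)
    return list(groups.values())

# Toy usage: torso and arm drift right together; background is static.
regions = {
    "torso": [(1.0, 0.0), (1.2, 0.1)],
    "arm":   [(0.9, 0.0), (1.1, -0.1)],
    "bg":    [(0.0, 0.0), (0.0, 0.0)],
}
adjacency = [("torso", "arm"), ("arm", "bg")]
groups = merge_coherent_regions(regions, adjacency)
```

For simplicity the sketch compares the averages of the original regions rather than recomputing a merged group's average; a fuller implementation would pool samples after each merge. With the toy data above, torso and arm merge into one group while the static background stays separate.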
