Robust motion-based image segmentation using fusion

To support real-time tracking of objects in video sequences, considerable effort has been directed at developing optical flow and other motion-based image segmentation algorithms. The goal is to segment multiple moving objects in an image based on their relative motion. This task is complicated by lighting variations; a combination of multiple motions and complex lighting effects can produce dramatic image variations that no single motion-based segmentation algorithm adequately accounts for. We propose to fuse the results of multiple motion segmentation algorithms to improve system robustness. Our approach uses the expectation-maximization (EM) algorithm as a fusion engine, and applies principal components analysis (PCA) for dimensionality reduction to improve EM performance and reduce the processing burden. The performance of the proposed fusion algorithm has been demonstrated in the "smart airbag" application of monitoring occupants in a moving automobile to determine whether they are too close to the instrument panel (airbag). By fusing the outputs of multiple algorithms, we reduce the percentage of pixels missed on the target by 35%.
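The fusion idea described above can be illustrated with a minimal sketch: treat each algorithm's per-pixel segmentation vote as one feature dimension, reduce the stacked votes with PCA, then fit a two-component Gaussian mixture via EM to label each pixel as target or background. This is an assumption-laden toy version (NumPy only, diagonal covariances, synthetic vote data, hypothetical helper names), not the paper's actual implementation.

```python
import numpy as np

def pca_reduce(X, k):
    # Center the data and project onto the top-k principal components via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def em_gmm_2(X, n_iter=50):
    # Fit a 2-component Gaussian mixture (diagonal covariance) with EM.
    n, d = X.shape
    # Initialize means at the extremes of the first PCA axis,
    # so each component starts near one cluster.
    mu = np.stack([X[X[:, 0].argmin()], X[X[:, 0].argmax()]])
    var = np.ones((2, d))            # per-dimension variances
    pi = np.array([0.5, 0.5])        # mixing weights
    for _ in range(n_iter):
        # E-step: log of (unnormalized) component densities, then responsibilities.
        logp = np.stack([
            -0.5 * (((X - mu[j]) ** 2 / var[j]).sum(1)
                    + np.log(var[j]).sum()) + np.log(pi[j])
            for j in range(2)], axis=1)
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        nk = r.sum(0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = np.stack([(r[:, j:j + 1] * (X - mu[j]) ** 2).sum(0) / nk[j] + 1e-6
                        for j in range(2)])
    return r.argmax(1)

# Toy data: per-pixel binary votes from 5 hypothetical segmentation algorithms.
rng = np.random.default_rng(1)
target = rng.random((200, 5)) > 0.2   # target pixels: mostly-1 votes
backgr = rng.random((200, 5)) > 0.8   # background pixels: mostly-0 votes
votes = np.vstack([target, backgr]).astype(float)

feats = pca_reduce(votes, 2)          # 5-D vote vectors -> 2-D features
labels = em_gmm_2(feats)              # fused target/background labels
```

In this sketch PCA collapses the redundant algorithm votes onto the directions of greatest disagreement, and EM then finds the two pixel populations without supervision; the fused label is simply the higher-responsibility mixture component.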
