AMOS: an active system for MPEG-4 video object segmentation

Object segmentation and tracking is a fundamental step in many digital video applications. In this paper, we present an active system (AMOS) that combines low-level automatic region segmentation with an active method for defining and tracking high-level semantic video objects. The system operates in two stages: an initial object segmentation stage, in which user input on the starting frame is used to create a semantic object, and an object tracking stage, in which the underlying regions of the semantic object are tracked and grouped through successive frames. Experiments with different types of video sequences show very good performance.
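
To make the two-stage structure concrete, the following minimal Python sketch shows one way such a pipeline could be organized. It is not the authors' implementation: the function names, the crude intensity-based region segmentation, and the 0.5 overlap threshold are all illustrative assumptions standing in for the actual low-level segmentation and tracking machinery.

# Minimal sketch of a user-assisted, two-stage object segmentation and
# tracking pipeline in the spirit of AMOS. All names, the placeholder
# region segmentation, and the overlap threshold are assumptions.

from typing import List

import numpy as np


def segment_regions(frame: np.ndarray) -> np.ndarray:
    # Low-level automatic region segmentation (placeholder): quantize
    # intensity into a few labels. A real system would use color, edge,
    # and motion cues to produce homogeneous regions.
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame.astype(float)
    return (gray // 64).astype(np.int32)


def group_regions(label_map: np.ndarray, guide_mask: np.ndarray,
                  min_overlap: float = 0.5) -> np.ndarray:
    # Group low-level regions into an object mask: a region joins the
    # object if enough of its area falls inside the (boolean) guide mask.
    obj_mask = np.zeros(label_map.shape, dtype=bool)
    for lab in np.unique(label_map):
        region = label_map == lab
        overlap = (region & guide_mask).sum() / max(int(region.sum()), 1)
        if overlap >= min_overlap:
            obj_mask |= region
    return obj_mask


def initial_segmentation(first_frame: np.ndarray,
                         user_mask: np.ndarray) -> np.ndarray:
    # Stage 1: user input (a rough mask drawn on the starting frame)
    # selects and groups automatically segmented regions into the
    # initial semantic object.
    return group_regions(segment_regions(first_frame), user_mask)


def track_object(frames: List[np.ndarray],
                 initial_mask: np.ndarray) -> List[np.ndarray]:
    # Stage 2: propagate the object through successive frames by
    # re-segmenting each frame and regrouping its regions against the
    # previous frame's object mask. A real tracker would first
    # motion-compensate the previous mask (e.g., by block matching).
    masks, prev_mask = [], initial_mask
    for frame in frames:
        prev_mask = group_regions(segment_regions(frame), prev_mask)
        masks.append(prev_mask)
    return masks

This sketch only conveys the control flow of a user-initialized segmentation stage followed by a region-tracking stage; the actual system refines region boundaries and uses motion estimation when regrouping regions from frame to frame.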
