Modeling and estimating persistent motion with geometric flows

We propose a principled framework for modeling persistent motion in dynamic scenes. In contrast to previous efforts on object tracking and optical flow estimation, which focus on local motion, we aim to infer a global model of persistent and collective dynamics. With this in mind, we first introduce the concept of a geometric flow, which describes motion simultaneously over space and time, and derive a vector-space representation of such flows based on the associated Lie algebra. We then extend the representation to model complex motion by combining multiple flows in a geometrically consistent manner. Taking advantage of the linear nature of this representation, we formulate a stochastic flow model and incorporate a Gaussian process to capture spatial coherence more effectively. This model leads to an efficient and robust algorithm that can integrate both point pairs and frame differences in motion estimation. Experiments on several types of videos demonstrate that the proposed approach is effective in modeling persistent motion.
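
To make the Lie-algebraic idea concrete, the sketch below illustrates one standard construction: an affine flow written as a linear combination of Lie-algebra generators, with points advected by the induced one-parameter group exp(tA). This is a minimal illustration in Python/NumPy under assumed choices (the particular basis, the `flow_field` and `advect` helpers, and the coefficients are hypothetical), not the paper's implementation.

```python
import numpy as np
from scipy.linalg import expm

# Generators of the 2D affine Lie algebra, embedded as 3x3 matrices acting
# on homogeneous coordinates [x, y, 1]^T. (Illustrative basis choice.)
B_TX  = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)   # translation in x
B_TY  = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)   # translation in y
B_ROT = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)  # rotation
B_SCL = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)   # isotropic scaling

def flow_field(coeffs, basis=(B_TX, B_TY, B_ROT, B_SCL)):
    """Combine basis generators linearly in the Lie algebra (a vector space)."""
    return sum(c * B for c, B in zip(coeffs, basis))

def advect(points, A, t=1.0):
    """Move 2D points along the flow generated by A for time t via exp(tA)."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    moved = homo @ expm(t * A).T                            # group action exp(tA)
    return moved[:, :2]

# Example: a gentle rotation combined with a drift to the right.
A = flow_field([0.5, 0.0, 0.2, 0.0])
pts = np.array([[1.0, 0.0], [0.0, 1.0]])
print(advect(pts, A, t=1.0))
```

Because the generators live in a vector space, flows can be added, scaled, and given Gaussian priors directly on their coefficients, which is the property the abstract refers to as the "linear nature of this representation".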
