Analyzing Activities in Videos Using Latent Dirichlet Allocation and Granger Causality

We propose an unsupervised method for analyzing motion activities in videos. Our method combines Latent Dirichlet Allocation (LDA) with Granger causality to discover the main motions that compose an activity and to determine how these motions relate to one another in time and space. We evaluated the method on synthetic and real-world datasets, where it compares favorably with state-of-the-art methods.
