Extrapolating Learned Manifolds for Human Activity Recognition

The problem of human activity recognition from visual stimuli can be approached using manifold learning, since the silhouette (binary) images of a person undergoing a smooth motion can be represented as a manifold in image space. While manifold learning methods allow the characterization of activity manifolds, performing activity recognition requires distinguishing between manifolds. This invariably involves extrapolating learned activity manifolds to new silhouettes, a task that is not fully addressed in the literature. This paper investigates and compares methods for extrapolating learned manifolds within the context of activity recognition. It also addresses the problem of obtaining dense samples for learning human silhouette manifolds.
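The abstract does not specify an algorithm, but the extrapolation problem it describes (mapping an unseen silhouette onto a manifold learned from training silhouettes) can be illustrated with a minimal sketch using an off-the-shelf out-of-sample transform. The array shapes, neighborhood size, and distance-based recognition rule below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (assumed setup, not the paper's implementation): learn a
# low-dimensional manifold from silhouette images of one activity, then map
# ("extrapolate") a new silhouette onto it with scikit-learn's
# out-of-sample transform.
import numpy as np
from sklearn.manifold import Isomap

# Training silhouettes: each binary image flattened to a vector.
# Assumption: 200 binary frames of size 64x64 from a single activity.
rng = np.random.default_rng(0)
train_silhouettes = rng.integers(0, 2, size=(200, 64 * 64)).astype(float)

# Learn a 2-D activity manifold from the training silhouettes.
embedder = Isomap(n_neighbors=10, n_components=2)
train_embedding = embedder.fit_transform(train_silhouettes)

# Extrapolate: embed a new, unseen silhouette onto the learned manifold.
new_silhouette = rng.integers(0, 2, size=(1, 64 * 64)).astype(float)
new_point = embedder.transform(new_silhouette)

# A simple recognition rule (assumed here): the activity whose learned
# manifold lies closest to the embedded point is the recognized activity.
distance_to_manifold = np.min(
    np.linalg.norm(train_embedding - new_point, axis=1)
)
print("Distance of new silhouette to learned manifold:", distance_to_manifold)
```

In practice one such model would be learned per activity, and the distances of a new silhouette (or sequence of silhouettes) to each activity manifold would be compared to make the recognition decision.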
