Action Recognition in a Wearable Assistance System

Enabling artificial systems to recognize human actions is a prerequisite for developing intelligent assistance systems that can instruct and supervise users in accomplishing tasks. To make such an assistance system wearable, head-mounted cameras allow the scene to be perceived visually from the user's perspective. However, realizing action recognition without any static sensors poses particular challenges: the camera movement is directly tied to the user's head motion and is not controlled by the system. In this paper we present how trajectory-based action recognition can be combined with object recognition, visual tracking, and background motion compensation to be applicable in such a wearable assistance system. The suitability of our approach is demonstrated by user studies in an object manipulation scenario.
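To make the idea of background motion compensation concrete, the following sketch illustrates one common way to cancel head-induced camera motion from a tracked trajectory before it is passed to a trajectory-based recognizer. This is a minimal illustration, not the authors' implementation: it assumes OpenCV, grayscale frame pairs, and a single tracked hand point, and uses a robustly estimated background homography (LMedS) to suppress the influence of independently moving foreground objects.

```python
import cv2
import numpy as np

def compensate_background_motion(prev_gray, curr_gray, hand_point):
    """Estimate camera ego-motion between two frames and remove it from a
    tracked point (e.g. the user's hand), so that only motion relative to
    the scene remains in the trajectory.  Hypothetical helper for illustration."""
    # Detect salient background features in the previous frame
    # (Shi-Tomasi "good features to track").
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=8)
    if prev_pts is None:
        return hand_point  # no features found: leave the point unchanged

    # Track the features into the current frame with pyramidal Lucas-Kanade.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good_prev = prev_pts[status.flatten() == 1]
    good_curr = curr_pts[status.flatten() == 1]
    if len(good_prev) < 4:
        return hand_point

    # Robustly estimate the dominant (background) homography; LMedS tolerates
    # the outliers caused by independently moving hands and objects.
    H, _ = cv2.findHomography(good_curr, good_prev, cv2.LMEDS)
    if H is None:
        return hand_point

    # Warp the current hand position back into the previous frame's
    # coordinates, cancelling the head/camera motion between the two frames.
    pt = np.array([[hand_point]], dtype=np.float32)
    stabilized = cv2.perspectiveTransform(pt, H)
    return tuple(stabilized[0, 0])
```

Applied frame by frame, such a compensation step yields hand trajectories expressed in a (locally) scene-fixed coordinate frame, which is what a trajectory-based action recognizer needs when the camera itself is moving with the user's head.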
