The ability to understand what humans are doing is crucial for any intelligent system that autonomously supports human daily activities. The technologies that would enable this ability, however, remain underdeveloped because human activity analysis poses many challenges. Among them are the difficulty of extracting human poses and motions from raw sensor data, whether recorded by visual or wearable sensors, and the need to recognize previously unseen activities using unsupervised learning. Furthermore, human activity analysis usually requires expensive sensors or sensing environments. With the availability of low-cost RGBD (RGB-depth) sensors, this new form of data can provide human posture data with a high degree of confidence. In this paper, we present our approach to extracting features directly from such data (joint positions) based on the human range of movement, and we report the results of tests performed to check the features' effectiveness in distinguishing sixteen (16) example activities. A simple unsupervised learning method, K-means clustering, was used to evaluate the effectiveness of the features. The results indicate that features based on range of movement significantly improve clustering performance.
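As a rough illustration of the pipeline described above, and not the authors' actual implementation, the sketch below derives hypothetical range-of-movement features (per-joint angle ranges normalized by assumed anatomical limits) from sequences of RGBD skeleton joint positions and clusters them with K-means. The joint triplets, their indices, and the `ANGLE_LIMITS` table are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: range-of-movement feature extraction from
# RGBD skeleton joints, followed by K-means clustering into 16 groups.
# Joint indices and angle limits below are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans

# Assumed anatomical range-of-movement limits (degrees) per joint angle.
ANGLE_LIMITS = {"elbow": 150.0, "knee": 140.0, "shoulder": 180.0, "hip": 120.0}

# Joint triplets (parent, joint, child) whose angle is measured; the
# indices into the per-frame joint array are assumptions for this sketch.
TRIPLETS = {"elbow": (4, 5, 6), "knee": (12, 13, 14),
            "shoulder": (2, 4, 5), "hip": (0, 12, 13)}

def joint_angle(a, b, c):
    """Angle (degrees) at joint b between segments b->a and b->c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def rom_features(frames):
    """frames: array of shape (T, J, 3) with joint positions for one
    activity sequence. Returns each joint angle's range over the sequence,
    normalized by its anatomical limit so every feature lies in [0, 1]."""
    feats = []
    for name, (i, j, k) in TRIPLETS.items():
        angles = [joint_angle(f[i], f[j], f[k]) for f in frames]
        feats.append((max(angles) - min(angles)) / ANGLE_LIMITS[name])
    return np.array(feats)

def cluster_activities(sequences, n_activities=16):
    """One feature vector per recorded sequence, clustered by K-means."""
    X = np.vstack([rom_features(s) for s in sequences])
    return KMeans(n_clusters=n_activities, n_init=10).fit_predict(X)
```

Normalizing each angle range by its anatomical limit keeps all features on a comparable [0, 1] scale, which matters for K-means since it clusters by Euclidean distance.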