There has been a significant amount of research on human activity classification relying either on Inertial Measurement Unit (IMU) data or on data from static cameras providing a third-person view. Using IMU data alone limits the variety and complexity of the activities that can be detected. For instance, sitting can be detected from IMU data, but it cannot be determined whether the subject has sat on a chair or a sofa, or where the subject is. To perform fine-grained activity classification from egocentric videos, and to distinguish between activities that cannot be differentiated from IMU data alone, we present an autonomous and robust method using data from both ego-vision cameras and IMUs. In contrast to convolutional neural network-based approaches, we propose employing capsule networks to obtain features from egocentric video data. Moreover, a Convolutional Long Short-Term Memory (ConvLSTM) framework is employed on both the egocentric videos and the IMU data to capture the temporal aspect of actions. We also propose a genetic algorithm-based approach to set various network parameters autonomously and systematically, rather than relying on manual settings. Experiments have been performed for 9- and 26-label activity classification, and the proposed method, using autonomously set network parameters, has provided very promising results, achieving overall accuracies of 86.6\% and 77.2\%, respectively. The proposed approach combining both modalities also provides increased accuracy compared to using only ego-vision data or only IMU data.
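As a rough illustration of the two-stream design described above, the sketch below builds an ego-vision branch that extracts capsule-style features per frame and aggregates them with a ConvLSTM, alongside an IMU branch, with late fusion for classification. The framework (TensorFlow/Keras), all layer sizes, the clip and sensor-window dimensions, the 16x8D primary-capsule stage, and the fusion scheme are illustrative assumptions, not the authors' exact configuration; the IMU branch here uses a plain LSTM as a simplification of the paper's ConvLSTM treatment.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

T, H, W, C = 16, 64, 64, 3   # hypothetical clip length and frame size
IMU_T, IMU_D = 128, 6        # hypothetical IMU window length and channel count
NUM_CLASSES = 9              # the 9-label setting from the abstract

def squash(s, axis=-1):
    """Capsule squashing non-linearity (Sabour et al., 2017)."""
    sq_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / tf.sqrt(sq_norm + 1e-8)

def primary_caps(t):
    # Group the 128 feature maps into 16 capsules of dimension 8 per location,
    # squash each capsule vector, then restore a ConvLSTM-friendly layout.
    t = tf.reshape(t, (-1, T, 10, 10, 16, 8))
    t = squash(t)
    return tf.reshape(t, (-1, T, 10, 10, 128))

# Ego-vision stream: per-frame convolutional primary capsules, then ConvLSTM over time.
video_in = layers.Input(shape=(T, H, W, C))
x = layers.TimeDistributed(layers.Conv2D(64, 9, strides=2, activation="relu"))(video_in)
x = layers.TimeDistributed(layers.Conv2D(128, 9, strides=2))(x)  # -> (T, 10, 10, 128)
x = layers.Lambda(primary_caps)(x)
x = layers.ConvLSTM2D(32, 3, padding="same")(x)  # temporal aggregation of capsule features
video_feat = layers.GlobalAveragePooling2D()(x)

# IMU stream: 1D convolution plus LSTM over the sensor window.
imu_in = layers.Input(shape=(IMU_T, IMU_D))
y = layers.Conv1D(32, 5, activation="relu")(imu_in)
y = layers.LSTM(64)(y)

# Late fusion of the two modalities, followed by classification.
fused = layers.Concatenate()([video_feat, y])
fused = layers.Dense(128, activation="relu")(fused)
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = Model(inputs=[video_in, imu_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

In the proposed method, a genetic algorithm would search over exactly this kind of hyperparameter (kernel sizes, numbers of filters and capsules, and so on); the values hard-coded here stand in for a single candidate configuration from such a search.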