Activity Recognition from Mobile Phone using Deep CNN

Achieving better performance has long been a central research goal in human activity recognition (HAR) based on mobile phones. Traditional activity recognition methods rely mainly on hand-crafted feature extraction, but manually selected features are not always effective, which limits recognition accuracy. This paper introduces a deep convolutional neural network (CNN) model for human activity recognition that effectively improves recognition accuracy. First, we collect 128-sample time-domain sequences from the accelerometer and gyroscope sensors of a smartphone. We then apply a time-domain-to-spatial-domain transformation, the Gramian Angular Field (GAF) transform, to convert these time-domain signals into 128×128 images, which lets us take full advantage of deep learning models that have proven highly effective in computer vision. Exploiting the powerful feature representation capability of deep CNNs, we construct an 8-layer convolutional neural network for human activity recognition. Experimental results on the UCI HAR dataset confirm the effectiveness of our method: the recognition accuracy is satisfactory and competitive with both traditional and state-of-the-art methods.
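The Gramian Angular Field transform mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration of the standard GAF (summation) encoding, rescaling a series to [-1, 1], mapping values to angles, and taking pairwise angle-sum cosines, not the paper's exact implementation; the function name and the synthetic sine-wave window are illustrative assumptions.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Convert a 1-D time series into a Gramian Angular (Summation) Field image.

    Steps:
      1. Min-max rescale the series into [-1, 1].
      2. Encode each value as an angle phi = arccos(x).
      3. Build the Gram-like matrix G[i, j] = cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    # 1. Rescale to [-1, 1] so arccos is well defined.
    x_min, x_max = x.min(), x.max()
    x_scaled = (2.0 * x - x_max - x_min) / (x_max - x_min)
    # Guard against tiny floating-point overshoot outside [-1, 1].
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    # 2. Polar encoding: value -> angle.
    phi = np.arccos(x_scaled)
    # 3. GASF: pairwise cosine of angle sums via an outer sum.
    return np.cos(phi[:, None] + phi[None, :])

# A 128-sample sensor window (as in the paper) yields a 128x128 image.
window = np.sin(np.linspace(0, 4 * np.pi, 128))
image = gramian_angular_field(window)
print(image.shape)  # (128, 128)
```

The resulting matrix is symmetric with entries in [-1, 1], so a stack of such images (one per sensor axis) can be fed directly to a 2-D CNN like any other image input.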
