A Deep Learning Method for Complex Human Activity Recognition Using Virtual Wearable Sensors

Sensor-based human activity recognition (HAR) is now a research hotspot in multiple application areas. With the rise of smart wearable devices equipped with inertial measurement units (IMUs), researchers have begun to utilize IMU data for HAR. By employing machine learning algorithms, early IMU-based HAR research achieved accurate classification results on classical HAR datasets, which contain only simple, repetitive daily activities. However, these datasets rarely reflect the rich diversity of real-world scenes. In this paper, we propose a novel deep learning method for complex HAR in real-world scenes. Specifically, in the off-line training stage, the AMASS dataset, which contains abundant human poses and virtual IMU data, is adopted to enhance variety and diversity. Moreover, a deep convolutional neural network with an unsupervised penalty is proposed to automatically extract features from AMASS and improve robustness. In the on-line testing stage, leveraging the advantages of transfer learning, we obtain the final result by fine-tuning part of the network (optimizing only the parameters of the fully-connected layers) on real IMU data. Experimental results show that the proposed method converges in only a few iterations and achieves an accuracy of 91.15% on a real IMU dataset, demonstrating its efficiency and effectiveness.
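To make the on-line adaptation step concrete, the following is a minimal PyTorch sketch of the transfer-learning idea described above: a 1-D CNN is pre-trained on virtual IMU data, its convolutional feature extractor is frozen, and only the fully-connected layers are fine-tuned on real IMU data. All names here (HARNet, the layer sizes, the checkpoint path) are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class HARNet(nn.Module):
    """Hypothetical 1-D CNN for IMU windows; not the paper's exact architecture."""
    def __init__(self, in_channels=6, num_classes=8):
        super().__init__()
        # Convolutional feature extractor (assumed pre-trained on virtual IMU data).
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Fully-connected classifier (the part fine-tuned on real IMU data).
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x))

model = HARNet()
# model.load_state_dict(torch.load("pretrained_on_amass.pt"))  # hypothetical checkpoint

# Freeze the convolutional layers; optimize only the fully-connected parameters.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(x_real, y_real):
    """One fine-tuning step on a batch of real IMU windows and labels."""
    optimizer.zero_grad()
    loss = criterion(model(x_real), y_real)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because gradients flow only through the small classifier head, each fine-tuning step is cheap, which is consistent with the abstract's observation that the method converges in only a few iterations.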
