Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning

In this paper, we present a systematic study of on-body sensor positioning and data acquisition for Human Activity Recognition (HAR) systems. We build a testbed consisting of eight body-worn Inertial Measurement Unit (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to train a deep learning model on human activity data acquired in both real-world and controlled environments. From the experimental results, we find that activity data sampled at a rate as low as 10 Hz from four sensors, placed on both wrists, the right ankle, and the waist, is sufficient for recognizing Activities of Daily Living (ADLs), including eating and driving. We adopt a two-level ensemble model to combine the class probabilities of multiple sensor modalities, and demonstrate that classifier-level sensor fusion can improve classification performance. By analyzing the accuracy of each sensor on different types of activity, we derive custom weights for multimodal sensor fusion that reflect the characteristics of individual activities.
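The pipeline described above, windowed IMU streams classified by an LSTM with per-sensor class probabilities combined through weighted classifier-level fusion, can be sketched roughly as follows. This is a minimal illustration, not the authors' exact architecture: the window length (2 s at 10 Hz), hidden size, number of classes, and the equal fusion weights are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class HARLSTM(nn.Module):
    """Minimal LSTM classifier for one IMU stream.

    Input: windows of shape (batch, time_steps, channels), e.g.
    2 s windows at 10 Hz (20 steps) of 6-channel accel+gyro data.
    """

    def __init__(self, n_channels=6, hidden=64, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out[:, -1])   # logits from the last time step


def fuse_probabilities(probs_per_sensor, weights):
    """Classifier-level fusion: weighted sum of per-sensor class
    probabilities. `probs_per_sensor` is a list of (batch, n_classes)
    softmax outputs, one per IMU; `weights` has shape
    (n_sensors, n_classes), so each sensor can carry a custom weight
    per activity class.
    """
    stacked = torch.stack(probs_per_sensor)           # (S, B, C)
    fused = (weights.unsqueeze(1) * stacked).sum(0)   # (B, C)
    return fused / fused.sum(dim=1, keepdim=True)     # renormalize


if __name__ == "__main__":
    sensors = ["left_wrist", "right_wrist", "right_ankle", "waist"]
    models = {s: HARLSTM() for s in sensors}
    window = torch.randn(4, 20, 6)  # 4 windows, 20 steps (2 s @ 10 Hz), 6 channels
    probs = [torch.softmax(models[s](window), dim=1) for s in sensors]
    # Equal weights here; in practice each sensor's weight for each class
    # would be derived from its validation accuracy on that activity.
    w = torch.full((len(sensors), 8), 0.25)
    print(fuse_probabilities(probs, w).shape)  # torch.Size([4, 8])
```

In this sketch the per-class weight matrix is what lets the fusion reflect activity characteristics: a wrist sensor, for example, could receive a larger weight for the eating class than an ankle sensor would.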
