Ensemble Classifier Managing Uncertainty in Accelerometer Data within Human Activity Recognition Systems

Human activity recognition (HAR) aims to recognize activities from data acquired by video or wearable sensors. Norway's largest health study, HUNT, recently concluded its fourth survey, in which 38,756 participants recorded activity data while wearing three-axis accelerometers on the thigh and back. HAR systems typically require all sensors to be operative and attached to the participant at all times, and many misclassifications occur when a sensor lies still after being detached from the subject's body during recording. To make HAR systems more robust against this issue, this thesis investigates a new type of ensemble classifier in which a meta classifier predicts sensor no-wear time, eliminates faulty sensor streams, and dynamically selects among position-specific LSTM-RNN classification models depending on the data available. The meta classifier is trained on a new "Sensor No-Wear Time" dataset consisting of real-world data, predicts sensor no-wear time with 97.2% accuracy, and can eliminate up to several days of misclassifications caused by detached sensors, making classification results more valid contributions to public health research. The thesis further shows that individual models for the thigh and back sensors struggle to classify certain static activities; a combined model using both sensors is therefore the best option for activity classification, achieving 85.1% accuracy against the existing HAR system's 76.5% and outperforming the individual models on static activities. Finally, storing classification results for all HUNT participants demands substantial storage space, and Feather proves to be the file format best suited for storing activity classification results: with compression, the result file for each participant shrinks from 2.5 GB to 941 KB, reducing the total storage needed for all HUNT4 participants from 96.89 TB to roughly 0.0365 TB (36.5 GB), a 99.96% reduction.
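To make the ensemble idea concrete, the following is a minimal sketch of the dispatch logic described above: a meta classifier flags no-wear windows per sensor stream, flagged streams are dropped, and the position-specific model matching the surviving streams is selected. All names (classify_window, meta_clf, the models dict) and the exact interfaces are illustrative assumptions, not the thesis's actual code.

```python
# Hypothetical sketch of the ensemble dispatch: gate each sensor stream
# with the meta classifier, then pick the matching LSTM-RNN model.

import numpy as np

def classify_window(thigh_win, back_win, meta_clf, models):
    """Classify one window of accelerometer data.

    thigh_win, back_win : np.ndarray of shape (timesteps, 3), or None
    meta_clf            : hypothetical model returning 1 for "no-wear"
    models              : dict of LSTM classifiers: "both", "thigh", "back"
    """
    # Drop streams the meta classifier judges to be detached (no-wear).
    if thigh_win is not None and meta_clf.predict(thigh_win[None])[0] == 1:
        thigh_win = None
    if back_win is not None and meta_clf.predict(back_win[None])[0] == 1:
        back_win = None

    # Select the position-specific model matching the surviving streams.
    if thigh_win is not None and back_win is not None:
        x = np.concatenate([thigh_win, back_win], axis=-1)  # (timesteps, 6)
        return models["both"].predict(x[None]).argmax(axis=-1)[0]
    if thigh_win is not None:
        return models["thigh"].predict(thigh_win[None]).argmax(axis=-1)[0]
    if back_win is not None:
        return models["back"].predict(back_win[None]).argmax(axis=-1)[0]
    return None  # both sensors detached: no activity label for this window
```

In this sketch, falling back to a single-sensor model (rather than forcing the combined model) is what lets the system keep producing labels during partial no-wear periods instead of misclassifying a still sensor as a static activity.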

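The storage claim can likewise be illustrated with a short sketch. Writing per-window predictions to a compressed Feather file via pandas/pyarrow is one plausible realization of the reported setup; the column names and zstd codec choice here are assumptions, not confirmed details of the thesis.

```python
# Sketch of storing per-participant activity predictions as compressed
# Feather (pandas + pyarrow). Column names are illustrative only.

import pandas as pd

def save_predictions(timestamps, labels, path):
    df = pd.DataFrame({"timestamp": timestamps, "activity": labels})
    # Feather with zstd compression keeps files small and fast to read.
    df.to_feather(path, compression="zstd")

# Back-of-envelope check of the reported totals:
#   941 KB/participant * 38,756 participants = 36,469,396 KB ~= 0.0365 TB
#   2.5 GB/participant * 38,756 participants ~= 96.89 TB uncompressed,
# which matches the stated 99.96% overall reduction.
```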