Deep Triplet Networks with Attention for Sensor-based Human Activity Recognition

Two of the most significant challenges in Human Activity Recognition using wearable devices are inter-class similarity and subject heterogeneity. These problems make it difficult to construct robust feature representations, which can degrade recognition quality. This study, for the first time, applies deep triplet networks with various triplet loss functions and mining methods to the Human Activity Recognition task. Moreover, we introduce a novel method for constructing hard triplets that exploits similarities between subjects performing the same activities, building on the concept of Hierarchical Triplet Loss. Our deep triplet models are based on recent state-of-the-art LSTM networks equipped with two attention mechanisms. The extensive experiments conducted in this paper identify important hyperparameters and settings for training deep metric learning models on widely used open-source Human Activity Recognition datasets. A comparison of the proposed models against recent benchmark models shows that the deep metric learning approach has the potential to improve recognition quality. Specifically, at least one of the implemented triplet networks achieves state-of-the-art results on each dataset used in this study, namely PAMAP2, USC-HAD and MHEALTH. Another positive effect of applying deep triplet networks, and especially the proposed sampling algorithm, is that the resulting feature representations are less affected by inter-class similarity and subject heterogeneity.
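To make the triplet objective concrete, the sketch below shows a standard batch-hard triplet loss, assuming PyTorch as the framework and a generic embedding network that maps sensor windows to fixed-length vectors; the abstract does not specify these details. The function name batch_hard_triplet_loss, the margin value, and the use of Euclidean distance are illustrative assumptions, and the paper's subject-aware hard-triplet construction would replace the simple in-batch mining shown here.

```python
# Minimal sketch of a batch-hard triplet loss (not the authors' exact method).
# Assumes `embeddings` is an (N, D) tensor produced by an embedding network
# and `labels` is an (N,) tensor of activity-class ids for the same batch.
import torch
import torch.nn.functional as F


def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.2) -> torch.Tensor:
    """For each anchor, take the farthest same-class sample as the hardest
    positive and the closest different-class sample as the hardest negative,
    then apply the margin-based triplet loss."""
    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = torch.cdist(embeddings, embeddings, p=2)

    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: maximum distance over same-class pairs (excluding self).
    pos_mask = same_label & ~eye
    hardest_pos = (dist * pos_mask.float()).max(dim=1).values

    # Hardest negative: minimum distance over different-class pairs.
    neg_dist = dist.clone()
    neg_dist[same_label] = float('inf')
    hardest_neg = neg_dist.min(dim=1).values

    return F.relu(hardest_pos - hardest_neg + margin).mean()
```

In this kind of setup, the loss would be computed on the attention-weighted LSTM embeddings of each mini-batch; a subject-aware mining scheme such as the one proposed in the paper could additionally bias the choice of negatives toward windows recorded from different subjects performing the same activity.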
