Semi-supervised learning for human activity recognition using adversarial autoencoders

The goal of the SHL recognition challenge 2019 is to recognize eight locomotion and transportation activities from the inertial sensor data of a smartphone. The dataset contains recordings from four phone placements (Torso, Bag, Hips, Hand). Participants must provide predictions on test data that contains only Hand-phone sensor readings, while only a small amount of labeled Hand-phone data is available in the validation set and the training data covers only the Torso, Bag, and Hips placements. Team DB proposes a deep semi-supervised learning approach: we choose the Adversarial Autoencoder (AAE) as the base of our model and employ convolutional networks for feature extraction. We show that semi-supervised learning makes it possible to exploit the unlabeled test data during AAE training together with the small amount of labeled validation data, and to achieve high accuracy on the Human Activity Recognition task.
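A minimal sketch of how such a semi-supervised AAE with a convolutional feature extractor can be wired together is given below, assuming PyTorch. The window length (WIN = 500 samples), channel count (CH = 6, e.g. 3-axis accelerometer plus gyroscope), layer sizes, and loss weighting are illustrative assumptions, not the authors' exact architecture or hyperparameters.

```python
# Sketch of a semi-supervised Adversarial Autoencoder for inertial sensor windows.
# Assumptions (not from the paper): WIN=500, CH=6, LATENT=32, layer sizes below.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT, CH, WIN = 8, 32, 6, 500

class ConvEncoder(nn.Module):
    """1-D convolutional feature extractor producing class logits and a latent code."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(CH, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.to_y = nn.Linear(64, NUM_CLASSES)   # class head (8 activities)
        self.to_z = nn.Linear(64, LATENT)        # style/latent head

    def forward(self, x):
        h = self.features(x)
        return self.to_y(h), self.to_z(h)

class Decoder(nn.Module):
    """Reconstructs the sensor window from the concatenated (y, z) code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_CLASSES + LATENT, 256), nn.ReLU(),
            nn.Linear(256, CH * WIN))

    def forward(self, y, z):
        return self.net(torch.cat([y, z], dim=1)).view(-1, CH, WIN)

class Discriminator(nn.Module):
    """Distinguishes latent codes z from samples of the Gaussian prior."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, z):
        return self.net(z)

def training_step(enc, dec, disc, x_unlab, x_lab, y_lab):
    """One illustrative step combining the three AAE objectives."""
    # 1) Reconstruction on unlabeled windows (e.g. the test-set data).
    logits_u, z_u = enc(x_unlab)
    recon = dec(F.softmax(logits_u, dim=1), z_u)
    loss_rec = F.mse_loss(recon, x_unlab)

    # 2) Adversarial regularization: push q(z|x) towards the N(0, I) prior.
    z_prior = torch.randn_like(z_u)
    d_real, d_fake = disc(z_prior), disc(z_u.detach())
    loss_disc = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                 + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_gen = disc(z_u)
    loss_gen = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))

    # 3) Supervised cross-entropy on the small labeled (validation) subset.
    logits_l, _ = enc(x_lab)
    loss_sup = F.cross_entropy(logits_l, y_lab)

    return loss_rec, loss_disc, loss_gen, loss_sup
```

In this setup the encoder doubles as the activity classifier: the supervised term is computed only on the few labeled Hand-phone windows, while the reconstruction and adversarial terms let the large unlabeled test set shape the shared representation.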
