Cross-Subject Transfer Learning in Human Activity Recognition Systems using Generative Adversarial Networks

Abstract Applications of intelligent systems, especially in smart homes and health-related domains, have drawn increasing attention in recent decades. Training a Human Activity Recognition (HAR) model, a major module of such systems, requires a fair amount of labeled data. Even when trained on large datasets, most existing models suffer a dramatic performance drop when tested on unseen data from new users. Moreover, recording enough data for each new user is impractical due to the limitations and challenges of working with human subjects. Transfer learning techniques aim to transfer the knowledge learned from a source domain (subject) to a target domain in order to reduce the model's performance loss in the target domain. This paper presents a novel adversarial knowledge-transfer method named SA-GAN (Subject Adaptor GAN), which utilizes the Generative Adversarial Network framework to perform cross-subject transfer learning for wearable sensor-based Human Activity Recognition. SA-GAN outperformed other state-of-the-art methods in more than 66% of experiments and achieved the second-best performance in another 25%. In some cases, it reached up to 90% of the accuracy obtainable by supervised training on data from the same domain.
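The core idea described above — an adversarial game in which a generator maps a new subject's sensor features toward the source subject's feature distribution while a discriminator tries to tell them apart — can be illustrated with a minimal sketch. This is not the paper's SA-GAN architecture; it is a toy NumPy example under simplifying assumptions (an affine generator, a logistic-regression discriminator, synthetic features standing in for sensor windows, and hand-derived gradients) meant only to show the alternating update structure of such cross-subject adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-window sensor features: source subject S and target
# subject T. T's features are a shifted, rescaled version of S's, simulating
# inter-subject distribution drift.
d = 4
Xs = rng.normal(0.0, 1.0, size=(256, d))               # source-subject features
Xt = 1.5 * rng.normal(0.0, 1.0, size=(256, d)) + 2.0   # target-subject features

# Generator: affine map G(x) = x @ W + b, trained so G(Xt) resembles Xs.
W, b = np.eye(d), np.zeros(d)
# Discriminator: logistic regression D(x) = sigmoid(x @ v + c), trained to
# output 1 on source samples and 0 on generated (adapted target) samples.
v, c = np.zeros(d), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(500):
    G_Xt = Xt @ W + b

    # Discriminator ascent step on: mean log D(Xs) + mean log(1 - D(G(Xt)))
    ps = sigmoid(Xs @ v + c)       # pushed toward 1
    pg = sigmoid(G_Xt @ v + c)     # pushed toward 0
    v += lr * (Xs.T @ (1 - ps) / len(Xs) - G_Xt.T @ pg / len(Xt))
    c += lr * (np.mean(1 - ps) - np.mean(pg))

    # Generator ascent step on the non-saturating objective: mean log D(G(Xt))
    pg = sigmoid((Xt @ W + b) @ v + c)
    W += lr * (Xt.T @ ((1 - pg)[:, None] * v[None, :]) / len(Xt))
    b += lr * np.mean((1 - pg)[:, None] * v[None, :], axis=0)

# After training, the adapted target features G(Xt) should sit closer to the
# source distribution than the raw target features did.
gap_before = abs(Xt.mean() - Xs.mean())
gap_after = abs((Xt @ W + b).mean() - Xs.mean())
```

In the actual method, the generator and discriminator would be neural networks trained on real wearable-sensor feature windows, and the adapted features would then feed a classifier trained on the source subject; the sketch only captures the adversarial alignment step.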
