Few-Shot Learning-Based Human Activity Recognition

Few-shot learning is a technique for training a model from a very small amount of labeled data by transferring knowledge from related tasks. In this paper, we propose a few-shot learning method for wearable-sensor-based human activity recognition, which seeks to infer high-level human activities from low-level sensor readings. Because human activity data are costly to collect and label, and different activity modes often share similar patterns, it can be more efficient to borrow information from existing activity recognition models than to gather enough data to train a new model from scratch. The proposed method uses a deep learning model for feature extraction and classification, and knowledge transfer is performed in the form of model parameter transfer. To alleviate negative transfer, we propose a metric that measures cross-domain class-wise relevance, so that knowledge of higher relevance receives larger weights during transfer. Extensive experiments demonstrate the advantages of the proposed approach.
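The abstract describes relevance-weighted parameter transfer at a high level. The following is a minimal NumPy sketch of that general idea, not the paper's actual method: the function names, the use of cosine similarity between class prototypes as the relevance measure, and the softmax normalization are all illustrative assumptions. Each target class's classifier weights are initialized as a relevance-weighted combination of the source model's classifier weights, so source classes more relevant to a target class contribute more.

```python
import numpy as np

def class_relevance(source_protos, target_protos):
    """Illustrative cross-domain class-wise relevance: cosine similarity
    between each target class prototype and every source class prototype,
    softmax-normalized so each target class gets a weight distribution
    over the source classes (rows sum to 1)."""
    s = source_protos / np.linalg.norm(source_protos, axis=1, keepdims=True)
    t = target_protos / np.linalg.norm(target_protos, axis=1, keepdims=True)
    sim = t @ s.T  # shape: (n_target_classes, n_source_classes)
    e = np.exp(sim - sim.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def transfer_classifier_weights(source_weights, relevance):
    """Initialize target classifier weights as a relevance-weighted
    combination of the source classifier weights, so more relevant
    source knowledge is assigned a larger weight during transfer."""
    return relevance @ source_weights  # (n_target_classes, feat_dim)
```

In this sketch, a target class whose prototype closely matches one source class draws its initial weights mainly from that class, which is the intuition behind down-weighting irrelevant source knowledge to avoid negative transfer.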
