Unsupervised Human Activity Representation Learning with Multi-task Deep Clustering

Human activity recognition (HAR) based on sensing data from wearable and mobile devices has become an active research area in ubiquitous computing, with a wide range of application scenarios in mobile social networking, environmental context sensing, health and well-being monitoring, etc. However, activity recognition based on manually annotated sensing data is labor-intensive, time-consuming, and privacy-sensitive, which prevents HAR systems from being deployed at scale. In this paper, we address the problem of unsupervised human activity recognition, which infers activities from unlabeled datasets without requiring domain knowledge. We propose an end-to-end multi-task deep clustering framework to solve this problem. Taking unlabeled multi-dimensional sensing signals as input, we first apply a CNN-BiLSTM autoencoder to form a compressed latent feature representation. We then apply K-means clustering to the extracted features to partition the dataset into groups, which produces pseudo labels for the instances. We further train a deep neural network (DNN) with the latent features and pseudo labels for activity recognition. The tasks of feature representation, clustering, and classification are integrated into a unified multi-task learning framework and optimized jointly to achieve unsupervised activity classification. We conduct extensive experiments on three public datasets. The results show that the proposed approach outperforms shallow unsupervised learning approaches and performs close to state-of-the-art supervised approaches when fine-tuned with a small amount of labeled data. The proposed approach significantly reduces the cost of manual data annotation and narrows the gap between unsupervised and supervised human activity recognition.
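To make the pipeline concrete, the following is a minimal sketch (PyTorch and scikit-learn assumed) of the workflow described above: a CNN-BiLSTM autoencoder compresses sensor windows into latent features, K-means on those features yields pseudo labels, and a small classifier is trained on them jointly with the reconstruction objective. All layer sizes, loss weights, window lengths, and the per-step clustering refresh are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class CNNBiLSTMAutoencoder(nn.Module):
    def __init__(self, in_channels=9, latent_dim=64, window=128):
        super().__init__()
        self.window = window
        # Encoder: 1D convolutions over time, then a BiLSTM summarized into a latent vector.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.to_latent = nn.Linear(2 * 64, latent_dim)
        # Decoder: reconstruct the flattened input window from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_channels * window),
        )

    def encode(self, x):                          # x: (batch, channels, time)
        h = self.conv(x).permute(0, 2, 1)         # (batch, time, 64)
        _, (h_n, _) = self.bilstm(h)              # h_n: (2, batch, 64)
        return self.to_latent(torch.cat([h_n[0], h_n[1]], dim=1))

    def forward(self, x):
        z = self.encode(x)
        recon = self.decoder(z).view(x.shape)
        return z, recon


def train_step(model, classifier, x, n_clusters, optimizer, alpha=1.0):
    """One joint step: reconstruction loss + classification on K-means pseudo labels.
    Real implementations typically refresh cluster assignments once per epoch."""
    z, recon = model(x)
    pseudo = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.detach().cpu().numpy())
    pseudo = torch.as_tensor(pseudo, dtype=torch.long, device=x.device)
    loss = nn.functional.mse_loss(recon, x) + alpha * nn.functional.cross_entropy(classifier(z), pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage with hypothetical shapes: 9 sensor channels, 128-sample windows, 6 activity clusters.
model = CNNBiLSTMAutoencoder(in_channels=9, latent_dim=64, window=128)
classifier = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 6))
optimizer = torch.optim.Adam(list(model.parameters()) + list(classifier.parameters()), lr=1e-3)
x = torch.randn(32, 9, 128)
train_step(model, classifier, x, n_clusters=6, optimizer=optimizer)
```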
