Hierarchical Signal Segmentation and Classification for Accurate Activity Recognition

The objective of this work is to recognize different modes of locomotion and, in particular, to identify the transition time from one mode to another as accurately as possible. Recognizing human daily activities, specifically modes of locomotion and transportation, with smartphones provides important contextual insight that can enhance the effectiveness of many mobile applications; detecting a transition from one mode to another, in particular, allows applications to react to this contextual change in a timely manner. Previous studies on activity recognition have used various fixed window sizes for signal segmentation and feature extraction. While extracting features from larger windows provides richer information to the classifier, it increases the misclassification rate when a transition occurs in the middle of a window, since the classifier assigns a single label to all samples within that window. This paper proposes a hierarchical signal segmentation approach to address this limitation of fixed-size windows. The process begins by extracting a rich set of features from large signal segments and predicting the activity. Segments suspected of containing more than one activity are then detected and split into smaller sub-windows to refine the label assignment: the classifier's search space is narrowed based on the initial activity estimate, and a label is assigned to each sub-window. Experimental results show that the proposed method improves the F1-score by 2% compared with fixed-window segmentation. The paper presents the techniques employed in our team's (The Drifters) submission to the SHL recognition challenge.
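The two-stage scheme described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the nearest-centroid "classifier", the margin-based transition test, and the window sizes are all placeholder assumptions standing in for the feature extraction and trained models the paper uses.

```python
import numpy as np

# Hypothetical stand-in for the paper's feature-based classifier:
# each activity is represented by a single mean signal level.
CENTROIDS = {0: 0.0, 1: 1.0, 2: 2.0}  # activity id -> signature

def predict(window, candidates):
    """Classify one window over a candidate label set.

    Returns (label, margin), where margin is the distance gap between
    the best and second-best candidates; a small margin suggests the
    window may straddle a transition between two activities.
    """
    m = window.mean()
    ranked = sorted((abs(m - CENTROIDS[c]), c) for c in candidates)
    label = ranked[0][1]
    margin = ranked[1][0] - ranked[0][0] if len(ranked) > 1 else float("inf")
    return label, margin

def hierarchical_labels(signal, big=100, small=25, margin_thr=0.5):
    """Two-stage labelling: classify large windows first, then re-label
    low-margin (suspected-transition) windows per sub-window, searching
    only the two most plausible activities from the first stage."""
    labels = np.empty(len(signal), dtype=int)
    all_acts = list(CENTROIDS)
    for start in range(0, len(signal), big):
        win = signal[start:start + big]
        label, margin = predict(win, all_acts)
        if margin >= margin_thr:
            # Confident: assume a single activity spans the whole window.
            labels[start:start + len(win)] = label
        else:
            # Suspected transition: narrow the search space to the two
            # closest activities, then label each sub-window separately.
            m = win.mean()
            top2 = sorted(all_acts, key=lambda c: abs(m - CENTROIDS[c]))[:2]
            for s in range(0, len(win), small):
                sub = win[s:s + small]
                sub_label, _ = predict(sub, top2)
                labels[start + s:start + s + len(sub)] = sub_label
    return labels
```

On a synthetic signal that switches activities mid-window (e.g. 150 samples of activity 0 followed by 150 of activity 1), the large-window pass flags the straddling window as ambiguous and the sub-window pass places the transition boundary at sample 150, which a single fixed 100-sample window could not do.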
