Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition

In this paper, a novel training/testing process for building and using a classification model for human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns a person's activities by training on skeletal data obtained from a motion sensor, such as the Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series: temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called 'shadow features' are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements and thereby model the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one using a wearable sensor and the other using a Kinect-based remote sensor. Our experiments demonstrate the advantages of the new method, which is expected to have an impact on human activity detection research.
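To make the idea concrete, the sketch below shows one plausible way to derive momentum-like shadow features from raw (x, y, z) coordinate time series and append them to the spatial features before training. The abstract does not give the paper's exact formula, so the exponential smoothing of frame-to-frame displacements, the `shadow_features` helper, and the `alpha` parameter are illustrative assumptions, not the authors' definition.

```python
# Illustrative sketch only: assumes shadow features are a smoothed
# velocity (momentum-like) signal derived from raw coordinate series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_features(coords, alpha=0.3):
    """Compute momentum-like 'shadow' features for one recording.

    coords : array of shape (T, D) -- T time steps, D spatial coordinates
             (e.g. x, y, z per tracked joint).
    alpha  : smoothing factor (hypothetical choice) for the exponential
             moving average of frame-to-frame displacements.
    Returns an array of shape (T, D) aligned with the input frames.
    """
    # Frame-to-frame displacements; pad the first frame with zeros.
    velocity = np.vstack([np.zeros((1, coords.shape[1])),
                          np.diff(coords, axis=0)])
    shadow = np.zeros_like(velocity)
    for t in range(1, len(velocity)):
        # Exponential smoothing carries the momentum of past movement forward.
        shadow[t] = alpha * velocity[t] + (1 - alpha) * shadow[t - 1]
    return shadow

# Toy usage with synthetic "skeletal" sequences for two activity classes.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(100, 30, 9))   # 100 recordings, 30 frames, 9 coords
y = rng.integers(0, 2, size=100)        # two activity labels

# Concatenate spatial features with their shadow counterparts per frame,
# then flatten each recording into a single feature vector.
X = np.array([np.hstack([seq, shadow_features(seq)]).ravel() for seq in X_raw])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

The design point carried over from the abstract is only that each frame contributes both where the body is (spatial coordinates) and how it is moving (the shadow signal), giving the classifier extra dimensions per time step; the particular smoothing scheme and classifier above are placeholders.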
