Accurate Physical Activity Recognition using Multidimensional Features and Markov Model for Smart Health Fitness

Recent developments in sensor technologies have made physical activity recognition (PAR) an essential tool for smart health monitoring and fitness training. For efficient PAR, model representation and training are significant factors in the ultimate success of a recognition system: without proper representation and training, body parts and physical activities cannot be reliably distinguished. This paper provides a unified framework that extracts multidimensional features through a fusion of body part models and quadratic discriminant analysis, and uses these features for markerless human pose estimation. Multilevel features are extracted as displacement parameters that serve as spatiotemporal properties, representing the positions of the body parts over time. Finally, these features are processed by a maximum entropy Markov model, which acts as the recognition engine based on transition and emission probability values. Experimental results demonstrate that the proposed model produces more accurate results than state-of-the-art methods for both body part detection and physical activity recognition. The proposed method achieves 90.91% accuracy for body part detection on the University of Central Florida (UCF) Sports Action dataset, and 89.09% and 88.26% accuracy for activity recognition on the UCF YouTube Action dataset and the IM-DailyRGBEvents dataset, respectively.
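The recognition engine described above decodes an activity sequence from transition and emission probabilities. A minimal sketch of such decoding is the Viterbi algorithm below; the state names, probability tables, and quantized observation symbols are hypothetical toy values for illustration, not taken from the paper (in the actual system, the displacement-based spatiotemporal features would play the role of the observations, and the maximum entropy Markov model would condition transitions on them).

```python
import numpy as np

# Hypothetical toy setup: three activity states and a short sequence of
# quantized feature observations (stand-ins for the paper's displacement
# features). All values below are illustrative assumptions.
states = ["walk", "run", "jump"]
n_states = len(states)

# Illustrative transition and emission probability tables (rows sum to 1).
transition = np.array([
    [0.7, 0.2, 0.1],   # from walk
    [0.3, 0.6, 0.1],   # from run
    [0.2, 0.3, 0.5],   # from jump
])
emission = np.array([
    [0.6, 0.3, 0.1],   # walk emits symbols 0/1/2
    [0.1, 0.6, 0.3],   # run
    [0.2, 0.2, 0.6],   # jump
])
initial = np.array([0.5, 0.3, 0.2])

def viterbi(observations):
    """Return the most likely activity sequence for quantized observations."""
    T = len(observations)
    delta = np.zeros((T, n_states))              # best log-probability so far
    backptr = np.zeros((T, n_states), dtype=int) # best predecessor state
    delta[0] = np.log(initial) + np.log(emission[:, observations[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] + np.log(transition[:, j])
            backptr[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[backptr[t, j]] + np.log(emission[j, observations[t]])
    # Trace the best path backwards from the most likely final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 2]))  # → ['walk', 'walk', 'run', 'run', 'run']
```

With these toy tables, the decoder labels the early observations as "walk" and switches to "run" once the observation symbols favor it; a maximum entropy Markov model differs in that each transition probability is produced by a maximum entropy classifier conditioned on the current observation, but the decoding step is the same.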
