Recognizing Ping-Pong Motions Using Inertial Data Based on Machine Learning Classification Algorithms

With the development of Internet of Things (IoT) technology and various sensing technologies, new ways of perceiving people and the environment have emerged. Commercial wearable devices integrate a variety of sensors and can play a significant role in motion capture and behavioral analysis. This paper proposes a solution for recognizing human motion in ping-pong using a commercial smart watch. We developed a data acquisition system based on an IoT architecture to collect the watch's acceleration, angular velocity, and magnetic induction data. Using features extracted from these data, we ran experiments with major machine learning classification algorithms, including k-nearest neighbor, support vector machine, Naive Bayes, logistic regression, decision tree, and random forest. The results show that random forest performs best, reaching a recognition rate of 97.80%. In addition, we designed a simple convolutional neural network to compare its performance on this problem. The network consists of two convolutional layers, two pooling layers, and two fully connected layers, and it operates on the raw data without feature extraction, achieving an accuracy of 87.55%. This research can provide training assistance for amateur ping-pong players.
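The feature-based classification step described above can be sketched as follows. This is a minimal illustration only: the abstract does not state the actual window features, window counts, or motion classes, so the sizes and synthetic data below are assumptions standing in for the real extracted features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed sizes: e.g. 600 motion windows, 36 statistical features per window
# (mean/std/min/max per sensor axis), and 6 stroke classes. All hypothetical.
rng = np.random.default_rng(0)
n_windows, n_features, n_classes = 600, 36, 6
X = rng.normal(size=(n_windows, n_features))      # stand-in for extracted features
y = rng.integers(0, n_classes, size=n_windows)    # stand-in for stroke labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Random forest was the best-performing classifier in the paper's comparison.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(accuracy)
```

With real inertial features the same pipeline applies unchanged; only the feature extraction that produces `X` and the labels `y` differ.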
