Human activity recognition is widely used in many fields, such as smart home monitoring, fire detection and rescue, and hospital patient management. Acoustic waves offer an effective means of recognizing human activity. Traditional approaches use one or a few ultrasonic sensors to receive signals and require the extraction of many feature quantities from the received data to improve recognition accuracy. In this study, we propose an approach to human activity recognition based on a two-dimensional acoustic array and convolutional neural networks. A single feature quantity is used to characterize the sound of human activities and to identify those activities. The results show that the overall recognition accuracy is 97.5% for time-domain data and 100% for frequency-domain data. The influence of array size on recognition accuracy is discussed, and the accuracy of the proposed approach is compared with that of traditional recognition methods such as k-nearest neighbors and support vector machines, both of which it outperforms.
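To make the described pipeline concrete, the following is a minimal sketch of a convolutional classifier operating on a single 2-D feature map derived from the acoustic array. The abstract does not specify the network layout, input resolution, or set of activity classes, so every value below (the 64x64 feature map, four classes, layer sizes) is an assumption for illustration only, not the authors' implementation.

# Illustrative sketch only: input shape, class count, and layer sizes are assumed.
import torch
import torch.nn as nn

class AcousticArrayCNN(nn.Module):
    """Small CNN that classifies one 2-D feature map built from the
    two-dimensional acoustic array (time-domain or frequency-domain data)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # single feature map in, 16 maps out
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),                 # fixed-size output regardless of array size
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Dummy batch: 8 samples, each a single 64x64 feature map (assumed shape).
model = AcousticArrayCNN()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 4]) -> one score per activity class

Training such a model would proceed with a standard cross-entropy loss over labeled activity recordings; the same network could be fed either time-domain or frequency-domain feature maps, which is the comparison reported in the abstract.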