Standing-Posture Recognition in Human–Robot Collaboration Based on Deep Learning and the Dempster–Shafer Evidence Theory

During human–robot collaboration (HRC), robot systems must accurately perceive the actions and intentions of human workers. The present study proposes classifying standing postures from standing-pressure images, by which a robot system can predict the intended actions of human workers in an HRC environment. To this end, it explores deep learning-based standing-posture recognition and a multi-algorithm fusion method for HRC. To acquire pressure-distribution data, ten participants stood on a pressure-sensing floor embedded with thin-film pressure sensors, yielding pressure data for nine standing postures from each participant. The standing postures were discriminated by seven classification algorithms, and the outputs of the best three algorithms were fused using the Dempster–Shafer evidence theory to improve accuracy and robustness. In a cross-validation test, the best method achieved an average accuracy of 99.96%. These results show that the convolutional neural network classifier and the data-fusion algorithm can feasibly classify the standing postures of human workers.
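The fusion step described above can be illustrated with a minimal sketch of Dempster's rule of combination. This is not the paper's implementation: it assumes each classifier's softmax output is used directly as a basic probability assignment over the nine singleton posture classes (no compound hypotheses), and the classifier names and probability vectors below are hypothetical.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions defined over singleton hypotheses only.

    With singleton-only frames, agreeing mass lies on the diagonal of the
    outer product; all off-diagonal mass is conflict and is normalized away.
    """
    joint = np.outer(m1, m2)
    agreement = np.diag(joint).copy()
    conflict = joint.sum() - agreement.sum()
    if conflict >= 1.0:
        raise ValueError("Total conflict; sources cannot be combined")
    return agreement / (1.0 - conflict)

# Hypothetical softmax outputs of three classifiers for one pressure image,
# treated as mass assignments over the nine posture classes.
cnn = np.array([0.70, 0.10, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02])
svm = np.array([0.60, 0.20, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02])
knn = np.array([0.50, 0.30, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02])

# Dempster's rule is associative, so the three sources can be fused pairwise.
fused = dempster_combine(dempster_combine(cnn, svm), knn)
print(fused.argmax())  # index of the fused posture decision
```

Because all three sources agree on class 0, the fused mass on that class exceeds any single classifier's confidence, which is the intended effect of evidence fusion: consistent votes reinforce one another while conflicting mass is discounted.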
