Survey and Planning of High-Payload Human-Robot Collaboration: Multi-modal Communication Based on Sensor Fusion

Human-Robot Collaboration (HRC) has gained increased attention with the widespread commissioning and use of collaborative robots. However, recent studies show that fenceless collaborative robots are not as harmless as they appear. In addition, collaborative robots usually have a very limited payload (up to 12 kg), which is insufficient for most industrial applications. To use high-payload industrial robots in HRC, today's safety systems offer only one option: limiting the speed of robot motion execution and adding redundant systems that supervise contact forces. Reducing execution speed lowers efficiency, which in turn limits the wider adoption of automation. To overcome this limitation, we propose in this paper a novel fusion of different safety-related sensors, combined so that they ensure safety while the human operator can focus on task execution and communicate with the system in a natural way. Different (multi-modal) communication channels are explored, and demonstration scenarios are presented.
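The speed limiting described above is typically realized as speed and separation monitoring: the robot's allowed velocity is scaled down as the measured human-robot distance shrinks, reaching a full stop below a protective separation distance. The following is a minimal illustrative sketch of such a scaling rule; the function name, the linear scaling profile, and the distance thresholds are assumptions for illustration, not the paper's actual safety system.

```python
def allowed_speed(separation_m: float,
                  v_max: float = 1.0,
                  d_stop: float = 0.5,
                  d_full: float = 2.0) -> float:
    """Illustrative speed-and-separation-monitoring rule.

    separation_m: measured human-robot distance in metres.
    v_max:        nominal maximum tool speed in m/s.
    d_stop:       protective distance below which the robot must stop.
    d_full:       distance above which full speed is permitted.

    Returns the permitted speed, scaled linearly between d_stop and d_full.
    """
    if separation_m <= d_stop:
        return 0.0          # human too close: protective stop
    if separation_m >= d_full:
        return v_max        # human far away: full speed allowed
    # linear interpolation between the stop and full-speed distances
    return v_max * (separation_m - d_stop) / (d_full - d_stop)
```

In practice the thresholds would be derived from the robot's stopping distance and the sensor system's detection latency rather than fixed constants, and the distance measurement itself would come from the fused safety sensors.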
