Mediating Human-Robot Collaboration through Mixed Reality Cues

This work presents a communication paradigm, based on a context-aware mixed reality approach, for instructing human workers as they collaborate with robots. The main objective of this approach is to use the physical work environment as a canvas for communicating task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm determines the pose and state of physical objects in and around the workspace, and a projection mapping technique overlays visual cues on the tracked objects and the workspace itself. Tracking and projecting onto objects simultaneously enables the system to provide just-in-time instructions for carrying out a procedural task. The system can additionally inform and warn humans about the robot's intentions and the safety of the workspace. It was hypothesized that executing a human-robot collaborative task with this system would improve the overall performance of the team and provide a positive experience for the human partner. To test this hypothesis, an experiment with human subjects was conducted, and the performance of the presented system, measured both objectively and subjectively, was compared with that of a conventional method based on printed instructions. Projecting visual cues enabled the human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
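The core geometric step behind such a system, taking a cue point anchored to a tracked object and finding the projector pixel at which to draw it, can be illustrated with a short sketch. The Python/OpenCV snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes projector intrinsics and camera-to-projector extrinsics are available from an offline projector-camera calibration, and the function name `cue_pixel` along with every numeric value is a placeholder.

```python
# Minimal sketch (assumption, not the paper's code): map a cue point defined
# in a tracked object's frame to projector pixel coordinates, given a
# calibrated projector-camera pair. All numeric values are placeholders.
import numpy as np
import cv2

# Assumed projector intrinsics (placeholder focal lengths / principal point).
K_proj = np.array([[1400.0,    0.0, 960.0],
                   [   0.0, 1400.0, 540.0],
                   [   0.0,    0.0,   1.0]])
dist_proj = np.zeros(5)  # assume negligible projector lens distortion

# Assumed camera-to-projector extrinsics from an offline calibration.
R_cam2proj = np.eye(3)                          # placeholder rotation
t_cam2proj = np.array([[0.10], [0.0], [0.0]])   # placeholder 10 cm baseline

def cue_pixel(p_obj, R_obj, t_obj):
    """Project a cue point p_obj (in the object's frame) into projector
    pixels. (R_obj, t_obj) is the object pose in the camera frame, e.g.
    from a model-based tracker or cv2.solvePnP."""
    # Object frame -> camera frame.
    p_cam = R_obj @ p_obj.reshape(3, 1) + t_obj
    # Camera frame -> projector frame, then perspective projection.
    rvec, _ = cv2.Rodrigues(R_cam2proj)
    pix, _ = cv2.projectPoints(p_cam.reshape(1, 1, 3), rvec, t_cam2proj,
                               K_proj, dist_proj)
    return pix.ravel()  # (u, v) pixel where the overlay should be drawn

# Example: highlight a point 5 cm above the origin of an object that the
# tracker reports 0.8 m in front of the camera (illustrative pose).
u, v = cue_pixel(np.array([0.0, 0.0, 0.05]),
                 np.eye(3), np.array([[0.0], [0.0], [0.8]]))
```

Running this per frame against the tracker's pose estimate is what keeps the projected cues locked onto moving objects, which is the property the abstract relies on for just-in-time instructions.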
