Communicating Inferred Goals With Passive Augmented Reality and Active Haptic Feedback

Robots learn as they interact with humans. Consider a human teleoperating an assistive robot arm: as the human guides and corrects the arm's motion, the robot gathers information about the human's desired task. But how does the human know what their robot has inferred? Today's approaches often focus on conveying intent: for instance, using legible motions or gestures to indicate what the robot is planning. However, closing the loop on robot inference requires more than just revealing the robot's current policy: the robot should also display the alternatives it thinks are likely, and prompt the human teacher when additional guidance is necessary. In this letter we propose a multimodal approach for communicating robot inference that combines both passive and active feedback. Specifically, we leverage information-rich augmented reality to passively visualize what the robot has inferred, and attention-grabbing haptic wristbands to actively prompt and direct the human's teaching. We apply our system to shared autonomy tasks where the robot must infer the human's goal in real time. Within this context, we integrate passive and active modalities into a single algorithmic framework that determines when and which type of feedback to provide. Combining passive and active feedback experimentally outperforms single-modality baselines: in an in-person user study, our integrated approach increased how efficiently humans taught the robot while simultaneously decreasing the amount of time they spent interacting with it. Videos here: https://youtu.be/swq_u4iIP-g
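To make the framework described above concrete, here is a minimal sketch of one plausible inference-and-arbitration loop: a Bayesian belief over candidate goals is updated from each human input under a Boltzmann-rational observation model, and the normalized entropy of that belief decides between passive AR visualization and an active haptic prompt. This is an illustration under stated assumptions, not the letter's actual implementation; the function names, the distance-based likelihood, and the entropy threshold are all hypothetical.

```python
import numpy as np

def update_belief(belief, goals, state, action, beta=5.0):
    """Bayesian goal inference: P(g | a) is proportional to P(a | g) P(g).

    Assumes a Boltzmann-rational human: an input `action` is
    exponentially more likely the closer it moves the end effector
    toward a candidate goal. `beta` (assumed) controls rationality.
    """
    likelihoods = np.array([
        np.exp(-beta * np.linalg.norm(state + action - g))
        for g in goals
    ])
    posterior = likelihoods * belief
    return posterior / posterior.sum()

def choose_feedback(belief, entropy_threshold=0.8):
    """Arbitrate between modalities from the current belief.

    Low entropy  -> passively render the inferred goal in AR.
    High entropy -> actively prompt the human via the haptic wristband.
    The 0.8 threshold is an illustrative placeholder.
    """
    p = belief[belief > 0]
    entropy = -np.sum(p * np.log(p)) / np.log(len(belief))  # normalized to [0, 1]
    return "haptic_prompt" if entropy > entropy_threshold else "ar_display"

# Example: two candidate goals, uniform prior, one human input
# nudging the arm toward the first goal.
goals = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]
belief = np.ones(len(goals)) / len(goals)
state = np.zeros(2)
action = np.array([0.1, 0.0])
belief = update_belief(belief, goals, state, action)
print(choose_feedback(belief))
```

In a loop like this, the robot would default to the passive AR display while its belief is confident, and escalate to the attention-grabbing haptic cue only when its goal estimate becomes ambiguous, matching the "when and which type of feedback" decision the abstract describes.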
