Assessing Multimodal Interactions with Mixed-Initiative Teams

The state of the art in robotics is advancing to support warfighters' ability to project force and extend their reach across a variety of future missions. Seamless integration of robots with the warfighter will require advancing interfaces from teleoperation to collaboration. The current approach to meeting this requirement is to equip tomorrow's robots with human-to-human style communication capabilities through multimodal communication. However advanced, today's robots do not yet come close to supporting teaming in dismounted military operations; simulation is therefore required for developers to assess multimodal interfaces in complex multi-tasking scenarios. This paper describes existing and future simulations that support assessment of multimodal human-robot interaction in dismounted soldier-robot teams.
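To make the assessment problem concrete, the sketch below illustrates one way a simulation testbed might log and fuse multimodal operator commands (e.g., speech plus gesture) before handing a single resolved command to a simulated robot teammate. This is a minimal illustration only: the channel names, confidence scores, fusion window, and threshold are hypothetical assumptions, not the interfaces or algorithms used in the simulations the paper describes.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ModalEvent:
        channel: str        # hypothetical channel label, e.g. "speech" or "gesture"
        command: str        # recognized command label from that channel
        confidence: float   # recognizer confidence in [0, 1]
        timestamp: float    # seconds since scenario start

    def fuse(events: List[ModalEvent], window: float = 1.5) -> Optional[str]:
        """Late fusion sketch: if two different channels agree on a command
        within the time window, accept it; otherwise fall back to the single
        most confident event above an (assumed) threshold."""
        if not events:
            return None
        events = sorted(events, key=lambda e: e.timestamp)
        for i, a in enumerate(events):
            for b in events[i + 1:]:
                if b.timestamp - a.timestamp > window:
                    break  # events are sorted, so later ones are farther apart
                if a.channel != b.channel and a.command == b.command:
                    return a.command  # cross-modal agreement
        best = max(events, key=lambda e: e.confidence)
        return best.command if best.confidence >= 0.8 else None

    # Example: a simulated operator says "halt" and gestures "halt" nearly
    # simultaneously while multitasking; fusion resolves a single command.
    log = [
        ModalEvent("speech", "halt", 0.74, 12.1),
        ModalEvent("gesture", "halt", 0.65, 12.6),
    ]
    print(fuse(log))  # -> "halt"

In a simulation-based assessment, a log of this kind also supports the metrics of interest in multi-tasking scenarios, such as how often cross-modal agreement occurs and how fusion latency trades off against command accuracy.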
