User Interfaces for Human-Robot Interaction in Field Robotics

This chapter proposes thirty-two guidelines for proactively building a good human-robot user interface and illustrates them through case studies of two technologically mature ImPACT Tough Robotics Challenge (TRC) systems: the cyber K9 and construction robot projects. A designer will likely have to build three notably different interfaces at different points in the development process: a diagnostic interface for developers to monitor and debug the robot using their expert knowledge, an end-user interface tailored to the tasks and decisions that operators and knowledge workers must execute, and an explicative interface that enables the public to visualize the important scientific achievements afforded by the robot system. The thirty-two guidelines, synthesized from the human-computer interaction, human-robot interaction, and computer-supported cooperative work (CSCW) communities, are clustered around four general categories: roles, layout appropriateness, the Four C's (content, comparison, coordination, color), and general interaction with, and through, the display.
