Towards A Multidimensional Perspective on Shared Autonomy

Shared Autonomy in the traditional sense focuses on the degree of user intervention in the control of an artificial system. We propose to broaden this notion to allow for more interactive scenarios. This requires a shift away from the single-system perspective towards the interaction itself, the participating agents, and the cooperation as such. Such a view on the interaction of autonomous agents has to be based on a more fine-grained understanding of autonomy. As a starting point for a multidimensional perspective on shared autonomy, we therefore extend a differentiation of autonomy into three distinct levels to interactive tasks. In particular, we point out how this allows for flexible interaction patterns and the negotiation of changing roles in an ongoing cooperation.