Applications in HHI: Physical Cooperation

Humans critically depend on continuous verbal and nonverbal interaction: to align their mental states, to synchronize their intentions and goals, and to perform joint tasks such as carrying a heavy object together, manipulating objects in a shared workspace, or handing over components and building or assembling larger structures in teams. Typically, physical interaction is initiated by a short joint planning dialog and then accompanied by a stream of verbal utterances. To keep the interaction flowing smoothly in a given situation, humans draw on all their communication modalities and senses, often unconsciously. As we move toward the introduction of robotic co-workers that serve humans (some of them humanoids, others of different shape), humans will expect them to be integrated into the execution of the task at hand just as seamlessly as a human co-worker would be. Such seamless integration will only be possible if these robots provide a number of basic action primitives, for example, handover from human to robot and vice versa. The robots must also recognize and anticipate human intentions by analyzing and understanding the scene as far as is necessary for working jointly on the task. Most importantly, the robotic co-worker must be able to carry on a verbal and nonverbal dialog with the human partner, in parallel with and relating to the physical interaction. In this chapter, we give an overview of the ingredients of an integrated physical interaction scenario: methods to plan activities, to produce safe and human-interpretable motion, to interact through multimodal communication, to schedule actions for a joint task, and to align and synchronize the interaction by understanding human intentions. We summarize the state of the art in physical human-humanoid interaction systems and conclude by presenting three humanoid systems as case studies.
