Human to Robot Demonstrations of Routine Home Tasks: Acknowledgment and Response to the Robot’s Feedback

This paper investigates the possible role of the robot’s feedback in Human-Robot Interaction (HRI) from the human perspective and highlights some important conceptual and practical issues, such as the lack of explicitness and consistency in people’s demonstration strategies. More specifically, it examines the changes that can be expected on the part of the human teacher when the robot student declares that a given demonstration was not understood. From a system perspective, the findings of such studies can in turn inform the design of HRI systems that better anticipate and act in accordance with human expectations. In the user study reported here, which is partly intended as a replication and verification of a previous study, participants teach a humanoid robot the everyday domestic task of setting a table, both in Japanese and in non-Japanese (or “Western”) style. The participants’ acknowledgment of, and responses to, the robot’s feedback are discussed with regard to demonstration changes and consistency, based on an HRI gesture classification.
