Naturally Occurring Gestures in a Human-Robot Teaching Scenario

This paper describes our general framework for investigating how human gestures can facilitate interaction and communication between humans and robots. More specifically, a study was carried out to identify which "naturally occurring" gestures can be observed in a scenario where users explain to a robot how to perform a specific home task. The study followed a within-subjects design in which ten participants demonstrated how to lay a table for two people under two explanation conditions: using gestures only, or gestures combined with speech. The experiments also served to validate a new coding scheme for human gestures in human-robot interaction (HRI), which showed good inter-rater reliability. Moreover, an annotated video corpus was produced, and characteristics such as frequency, duration, and co-occurrence of the different gestural classes were gathered in order to capture requirements for designers of HRI systems. The results regarding the frequencies of the different gestural types suggest an interaction between the order of presentation of the two methods and the type of gestures produced. The results also suggest that there may be an interaction between the type of task and the type of gestures produced.
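To make the analysis concrete, the sketch below shows one common way such measurements are computed; it is a minimal illustration, not the authors' actual pipeline. It assumes Cohen's kappa as the inter-rater reliability measure (the paper does not specify which statistic was used) and a hypothetical annotation format of (gesture class, start time, end time) tuples; the gesture class names are likewise assumptions for illustration.

```python
from collections import Counter, defaultdict

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same video segments.

    Assumes both raters coded the same number of segments with one
    gesture class per segment.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of segments where the raters agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal class distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

def corpus_stats(annotations):
    """Per-class frequency and mean duration from (class, start, end) tuples."""
    freq = Counter(label for label, _, _ in annotations)
    total_duration = defaultdict(float)
    for label, start, end in annotations:
        total_duration[label] += end - start
    return {label: (freq[label], total_duration[label] / freq[label])
            for label in freq}

if __name__ == "__main__":
    # Hypothetical codings of five segments by two raters.
    rater1 = ["deictic", "iconic", "deictic", "manipulative", "iconic"]
    rater2 = ["deictic", "iconic", "beat", "manipulative", "iconic"]
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")

    # Hypothetical annotated corpus: class, start time (s), end time (s).
    corpus = [("deictic", 1.0, 2.5), ("iconic", 3.0, 4.2), ("deictic", 5.0, 5.8)]
    for label, (count, mean_dur) in corpus_stats(corpus).items():
        print(f"{label}: n={count}, mean duration={mean_dur:.2f}s")
```

In practice, annotations of this kind are typically exported from a video annotation tool, and co-occurrence of gestural classes with speech can be derived from the same interval representation by checking for overlapping time spans.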
