Investigating the Use of Non-verbal Cues in Human-Robot Interaction with a Nao Robot

This paper presents a new method for investigating the use of non-verbal cues in human-robot interaction, using a platform built on the Nao robot from a number of sensors, controllers, and programming interfaces. Using this platform, a set of pilot experiments was conducted with 12 users. A multimodal corpus was recorded using several cameras and microphones placed on and around the robot. Participants were asked to interact freely with the Nao robot after receiving brief instructions on how to use the commands, and were then asked a set of specific questions for feedback and evaluation. Preliminary results show that non-verbal cues aid human-robot interaction; furthermore, we found that people were more likely to interact with a robot that is capable of using non-verbal channels for understanding and communication.
