Rule Extraction Method Considering Reliability for Synchronized Behavior of Group Robots in Multi-party Conversations

In this paper, we propose a rule extraction method that detects contingencies between human actions in multi-party conversations while explicitly considering their reliability. We collect data on human actions in human-robot conversations and use the proposed method to extract contingencies involving the bystander's actions. The results show that the method detects 21 rules with high predictive reliability, together with variations of the bystander's actions, evaluated using an N-gram model extended to action sequences. Through an analysis of social skills, we show that the proposed method can extract rules that reflect the bystander's social skills.
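To make the idea concrete, the following is a minimal sketch of what reliability-aware rule extraction over action sequences might look like, assuming the method resembles association-rule mining over bigrams of coded actions. The function and parameter names (`extract_rules`, `min_support`, `min_confidence`) and the action labels are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal sketch of n-gram-based action-rule extraction with a reliability
# threshold. Assumption: "reliability" is modeled as the conditional
# probability P(following action | preceding action), filtered by support.
from collections import Counter
from itertools import islice

def ngrams(seq, n):
    """Yield successive n-grams from a sequence of action labels."""
    return zip(*(islice(seq, i, None) for i in range(n)))

def extract_rules(action_sequences, min_support=5, min_confidence=0.7):
    """Extract rules (preceding action -> following action) whose
    conditional probability (reliability) exceeds a threshold."""
    unigrams, bigrams = Counter(), Counter()
    for seq in action_sequences:
        unigrams.update(seq)
        bigrams.update(ngrams(seq, 2))
    rules = []
    for (a, b), count in bigrams.items():
        confidence = count / unigrams[a]  # P(b | a): the rule's reliability
        if count >= min_support and confidence >= min_confidence:
            rules.append((a, b, confidence, count))
    return sorted(rules, key=lambda r: -r[2])

# Hypothetical example: sequences of coded actions in a multi-party
# conversation (speaker gazes at bystander, bystander nods, ...).
sequences = [
    ["S_gaze_B", "B_nod", "S_speak", "B_gaze_S"],
    ["S_gaze_B", "B_nod", "B_gaze_S"],
]
for a, b, conf, n in extract_rules(sequences, min_support=2, min_confidence=0.5):
    print(f"{a} -> {b}  (reliability={conf:.2f}, n={n})")
```

Thresholding on both support and confidence is one plausible way to keep only rules that are both frequent and reliable, which matches the abstract's emphasis on reliability.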
