Modeling Moderation for Multi-Party Socially Assistive Robotics

Socially Assistive Robotics (SAR) is the study and development of algorithms that enable robots to help people achieve their goals through social interaction, such as coaching, therapy, or companionship. One extension of this field to multi-party interactions is the problem of moderation, in which an agent supports a group in achieving its goals. We validate the use of a robot moderator in an open-ended interaction. The implemented algorithm does not use speech recognition: the robot moderates the interaction based on who is speaking, not on what is said. We show that such a SAR moderator is accepted into human-human interactions, and that it can improve group cohesion and increase participant speech in those interactions.

A number of studies have directly examined the role of a moderator in online or teleconference contexts (e.g., [1]). In the virtual agents community, Bohus and Horvitz developed an approach that enables a virtual agent to take part in group conversations in open environments [2]. There has been substantial work on robot-centered interactions, in which the robot provides a service to participants, such as a bartender [3] or a game master [4]. Additionally, Mutlu et al. used a social robot to manipulate the roles of the participants [5], and Matsuyama et al. developed a framework for enabling an agent to integrate a participant into a conversation [6]. This work contributes an autonomous controller for a robot moderator, evaluated with participants in an open-ended discussion.
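The abstract does not detail the controller, but the idea of moderating on speech activity alone, without speech recognition, can be illustrated with a minimal, hypothetical sketch. The balancing policy, function names, and frame length below are assumptions for illustration, not the paper's algorithm: accumulate per-participant speaking time from a voice activity detector (cf. [7]) and direct the robot's next prompt at the quietest participant.

```python
import random

def update_speech_time(speech_seconds, active_flags, frame_len=0.03):
    """Accumulate per-participant speaking time from one VAD frame.

    speech_seconds: dict mapping participant -> total seconds of speech so far.
    active_flags:   dict mapping participant -> bool (speech detected this frame).
    frame_len:      duration of one analysis frame in seconds (assumed 30 ms).
    """
    for person, speaking in active_flags.items():
        if speaking:
            speech_seconds[person] += frame_len
    return speech_seconds

def pick_addressee(speech_seconds):
    """Return the participant who has spoken least so far.

    A simple balancing policy: the moderator directs its next prompt at
    the quietest participant, breaking ties randomly.
    """
    least = min(speech_seconds.values())
    quietest = [p for p, s in speech_seconds.items() if s == least]
    return random.choice(quietest)
```

For example, given accumulated times `{"A": 12.0, "B": 3.5, "C": 7.2}`, `pick_addressee` returns `"B"`; such a policy needs only who is speaking, never a transcript of what is said.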

[1] Maria Pateraki et al. Two people walk into a bar: dynamic multi-party social interaction with a robot agent. ICMI '12, 2012.

[2] Tetsunori Kobayashi et al. Four-participant group conversation: A facilitation robot controlling engagement density as the fourth participant. Computer Speech & Language, 2015.

[3] Adam Setapen et al. Creating robotic characters for long-term interaction. 2012.

[4] T. Kanda et al. Measurement of negative attitudes toward robots. 2006.

[5] Tetsunori Kobayashi et al. Framework of Communication Activation Robot Participating in Multiparty Conversation. AAAI Fall Symposium: Dialog with Robots, 2010.

[6] Ted Selker et al. Considerate Audio MEdiating Oracle (CAMEO): improving human-to-human communications in conference calls. DIS '12, 2012.

[7] Mohammad Hossein Moattar et al. A simple but efficient real-time Voice Activity Detection algorithm. 17th European Signal Processing Conference, 2009.

[8] T. Wongpakaran et al. The Group Cohesiveness Scale (GCS) for psychiatric inpatients. Perspectives in Psychiatric Care, 2013.

[9] T. Kanda et al. A cross-cultural study on attitudes towards robots. 2005.

[10] O. John et al. Measuring personality in one minute or less: A 10-item short version of the Big Five Inventory in English and German. 2007.

[11] D. Traum et al. The UTEP-ICT Cross-Cultural Multiparty Multimodal Dialog Corpus. 2010.

[12] Takayuki Kanda et al. Footing in human-robot conversations: How robots might shape participant roles using gaze cues. 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2009.

[13] Eric Horvitz et al. Facilitating multiparty dialog with gaze, gesture, and speech. ICMI-MLMI '10, 2010.