Decision-Making Prediction for Human-Robot Engagement between Pedestrian and Robot Receptionist

Social robots have recently begun providing a variety of customer services; for example, they can take the place of human receptionists. When people interact, they predict each other's decision-making from the other's actions and act accordingly for the other's benefit. However, taking such actions is difficult for today's social robots. Therefore, our initial aim is to solve the problem of how to predict decision-making in human-robot engagement. Choosing a reception system as a case study, we created a new model to predict who will use the system. The model contains a state transition function based on observation studies. We evaluated the model through a controlled experiment and a simulation experiment using field data, measuring both prediction performance and mental effects. The experimental results lead us to believe that the model can predict the decisions of others sufficiently well. We also found that a pedestrian who does not intend to talk with a robot receptionist experiences negative emotion when greeted by the robot, but a method we propose prevents this. We suggest that the suitability of an action is related to negative emotion, an aspect of usability. Using our model, we will attempt in future work to find a method that enables a robot to learn suitable actions by itself.
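To make the idea of a state-transition-based engagement predictor concrete, the following is a minimal, purely illustrative sketch. The states, the observed cue (whether the pedestrian is facing the robot), the transition probabilities, and the decision threshold are all assumptions for illustration; they are not the transition function the paper derived from its observation studies.

```python
# Hypothetical engagement state-transition model (illustrative only).
# All states and probabilities below are assumed, not taken from the paper.

ENGAGEMENT_STATES = ["passing_by", "slowing_down", "approaching", "engaged"]

# Transition probabilities conditioned on one observed cue:
# whether the pedestrian's body/head is oriented toward the robot.
TRANSITIONS = {
    ("passing_by", True):    {"passing_by": 0.5, "slowing_down": 0.5},
    ("passing_by", False):   {"passing_by": 0.9, "slowing_down": 0.1},
    ("slowing_down", True):  {"slowing_down": 0.3, "approaching": 0.7},
    ("slowing_down", False): {"passing_by": 0.6, "slowing_down": 0.4},
    ("approaching", True):   {"approaching": 0.2, "engaged": 0.8},
    ("approaching", False):  {"slowing_down": 0.5, "approaching": 0.5},
    ("engaged", True):       {"engaged": 1.0},
    ("engaged", False):      {"engaged": 0.7, "approaching": 0.3},
}

def step(belief, facing_robot):
    """Propagate a probability distribution over states one step."""
    new_belief = {s: 0.0 for s in ENGAGEMENT_STATES}
    for state, p in belief.items():
        for nxt, t in TRANSITIONS[(state, facing_robot)].items():
            new_belief[nxt] += p * t
    return new_belief

def will_engage(observations, threshold=0.5):
    """Predict whether the pedestrian will use the reception system:
    start fully in 'passing_by', fold in the cue sequence, and check
    whether the probability mass on 'engaged' exceeds the threshold."""
    belief = {"passing_by": 1.0, "slowing_down": 0.0,
              "approaching": 0.0, "engaged": 0.0}
    for obs in observations:
        belief = step(belief, obs)
    return belief["engaged"] >= threshold

# A pedestrian who keeps facing the robot is predicted to engage;
# one who never looks at it is not.
print(will_engage([True, True, True, True]))    # -> True
print(will_engage([False, False, False]))       # -> False
```

Such a predictor would let the robot withhold its greeting when the "engaged" probability stays low, which is one way to avoid the negative emotion that greeting a non-engaging pedestrian was found to cause.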
