Preference Modeling of Spatial Description in Human-Robot Interaction

Spatial description plays an important role in the design of human-robot interaction systems for intelligent robots. In this paper, we model users' preferences among types of spatial description by collecting the spatial constructions produced in two groups of tabletop task experiments, in which participants used spatial constructions to instruct a partner (human or robot) to pick up an indicated object. Preferences are modeled by analyzing the probability distribution of the different types of spatial description (including different reference frames) that participants produced in five typical scenarios, separately for the human and robot partner conditions. The results provide a basis for the design of collaborative robots that interact with people and can help improve the efficiency of human-centered human-robot interaction.
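The core of the modeling step is estimating, for each scenario and partner condition, an empirical probability distribution over description types from labeled utterances. Below is a minimal sketch of that computation; the category names, the labeling of utterances, and the helper function are illustrative assumptions, not the paper's actual taxonomy or pipeline.

```python
from collections import Counter

# Hypothetical spatial-description categories (reference frames); the
# paper's actual coding scheme may differ.
CATEGORIES = ["speaker-centered", "partner-centered", "object-centered", "neutral"]

def preference_distribution(labels):
    """Estimate the empirical probability of each description type
    from a list of hand-labeled utterances in one scenario/condition."""
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0:
        return {c: 0.0 for c in CATEGORIES}
    return {c: counts.get(c, 0) / total for c in CATEGORIES}

# Example: labeled utterances from one tabletop scenario, robot partner
robot_trials = ["partner-centered", "partner-centered", "object-centered",
                "speaker-centered", "partner-centered", "neutral"]
print(preference_distribution(robot_trials))
# -> {'speaker-centered': 0.167, 'partner-centered': 0.5, ...}
```

Comparing the resulting distributions across the human and robot conditions is then a matter of tabulating these per-scenario estimates side by side.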
