3D Visibility Check in Webots for Human Perspective Taking in Human-Robot Interaction

The rapid development of intelligent robotics suggests that humans and robots will live and work together in shared workspaces in the near future, which makes research on effective human-robot interaction essential. In the most common interaction scenario, humans and robots work cooperatively, and the robot should provide appropriate assistance toward a shared goal. The workspace contains several objects, including tools, and the robot must identify the object or tool the human intends. Because of obstacles in the environment, the scene may look different from the robot's perspective than from the human's. The robot therefore needs to take the human's perspective and simulate the situation from that viewpoint to identify the intended object. As a prerequisite for human perspective taking, the robot must first check the visibility of the environment from its own viewpoint. To address this challenge, this paper develops a 3D visibility check method based on a depth image in Webots. Using the developed method, a robot can determine whether each point in the environment is visible or occluded from its current posture and detect objects when they are visible.
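The paper does not include source code, but the depth-image visibility test it describes can be sketched as a Webots Python controller. The following is a minimal, illustrative sketch, not the authors' implementation: the device name "range-finder", the tolerance EPS, the helper is_visible, and the assumption that the range image stores z-depth (rather than Euclidean ray distance, which should be checked against the Webots documentation) are all assumptions, as is the camera-frame convention for query points.

```python
# Minimal sketch of a depth-image visibility test in a Webots Python
# controller. Assumptions (not from the paper): the robot carries a
# RangeFinder device named "range-finder", the range image stores
# z-depth in meters, and query points are already expressed in the
# camera frame (x right, y down, z along the viewing axis).
import math
from controller import Robot

EPS = 0.02  # depth-comparison tolerance in meters (assumed value)

robot = Robot()
timestep = int(robot.getBasicTimeStep())

rf = robot.getDevice("range-finder")  # device name is an assumption
rf.enable(timestep)

def is_visible(point, depth_image, width, height, fov):
    """Return True if a 3D point (camera frame) is not occluded.

    The point is projected with a pinhole model; it counts as visible
    when its depth does not exceed the depth the sensor measured along
    the same pixel ray, plus a small tolerance.
    """
    x, y, z = point
    if z <= 0.0:
        return False  # behind the camera
    f = width / (2.0 * math.tan(fov / 2.0))  # focal length in pixels
    u = int(round(width / 2.0 + f * x / z))
    v = int(round(height / 2.0 + f * y / z))
    if not (0 <= u < width and 0 <= v < height):
        return False  # outside the field of view
    measured = depth_image[v * width + u]  # flat row-major range image
    return z <= measured + EPS

while robot.step(timestep) != -1:
    depth = rf.getRangeImage()  # list of width*height floats (meters)
    w, h, fov = rf.getWidth(), rf.getHeight(), rf.getFov()
    # Example query: a point 1 m in front of the camera (illustrative).
    print("visible:", is_visible((0.0, 0.0, 1.0), depth, w, h, fov))
```

On top of this per-point test, one plausible realization of the object-level detection the abstract mentions is to sample points on an object's surface and declare the object visible when a sufficient fraction of those samples passes the test.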
