Recognizing pointing behavior using image processing for human-robot interaction

This paper presents a gesture-based interaction system through which a human directs a robot by pointing. A virtual room, modeled on the actual room, indicates the attended object by blinking it. The user points at an object he or she wants to move, and the robot either moves to the indicated point or moves the indicated object to that point in the actual room. To support interaction in actual rooms, a system was also developed that obtains the RGB information of the pointed-at object: it recognizes the pointing gesture and extracts features of the object by image processing. Two tracking modules, each consisting of a camera and a PC, were placed in the room, and the three-dimensional coordinates of the user's head and hand were calculated from their data. From these coordinates the system recognizes the indicated point and infers which object the user is attending to. Showing the user what the system has attended to requires some information about that object, so an RGB-data acquisition module was constructed; it takes the coordinates of the indicated point and calculates the corresponding coordinates in the camera image. The system was developed on Robot Technology Middleware (RT-Middleware), created by AIST (the National Institute of Advanced Industrial Science and Technology, Japan) to ease the integration of robot systems from modularized software components. By building on RT-Middleware, the functional elements for interacting with the human were implemented as components that can easily be integrated into a system.
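
The abstract does not detail how the head and hand coordinates are reconstructed from the two tracking modules, but with calibrated cameras the standard approach is linear (DLT-style) triangulation from the two views. Below is a minimal sketch under that assumption; the projection matrices P1, P2 and all function names are ours, not from the paper.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT-style) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices of the two tracking
             modules (assumed calibrated).
    uv1, uv2 : pixel coordinates (u, v) of the same feature (e.g. the
               user's head or hand) in each module's image.
    Returns the 3D point in world coordinates.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous
    # point X: u * (P[2] @ X) = P[0] @ X, and similarly for v.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # A @ X = 0: the homogeneous solution is the right singular
    # vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```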
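
Likewise, the paper does not specify how the indicated point is derived from the head and hand positions. A common formulation treats the pointing direction as the ray from the head through the hand and intersects it with the floor plane; the sketch below follows that formulation, and the flat floor at z = floor_z is our simplification.

```python
import numpy as np

def indicated_floor_point(head, hand, floor_z=0.0):
    """Estimate the pointed-at location as the intersection of the
    head->hand ray with a horizontal floor plane z = floor_z.

    head, hand : 3D positions (x, y, z) from the tracking modules.
    Returns the (x, y, z) intersection, or None if the ray is
    parallel to the floor or points away from it.
    """
    head = np.asarray(head, dtype=float)
    hand = np.asarray(hand, dtype=float)
    direction = hand - head
    if abs(direction[2]) < 1e-9:   # ray parallel to the floor
        return None
    t = (floor_z - head[2]) / direction[2]
    if t <= 0:                     # floor plane is behind the user
        return None
    return head + t * direction

# Example: head at 1.6 m, hand at 1.2 m, pointing forward and down
# yields a floor point 1.2 m in front of the user.
print(indicated_floor_point((0.0, 0.0, 1.6), (0.3, 0.0, 1.2)))
```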
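
For the RGB-data acquisition module, the abstract states only that the module converts the coordinates of the indicated point into coordinates in the picture. With a calibrated camera that conversion is a pinhole projection followed by a pixel lookup, sketched below; the function and parameter names are ours.

```python
import numpy as np

def rgb_at_point(P, image, X):
    """Project a 3D point into a camera image and read its RGB value.

    P : 3x4 projection matrix of the camera observing the scene.
    image : HxWx3 RGB array captured by that camera.
    X : 3D world coordinates of the indicated point / object.
    Returns (r, g, b), or None if the point projects outside the image.
    """
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)  # homogeneous projection
    u, v = x[0] / x[2], x[1] / x[2]                     # pixel coordinates
    h, w = image.shape[:2]
    if not (0 <= u < w and 0 <= v < h):
        return None
    return tuple(image[int(v), int(u)])                 # row = v, column = u
```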
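
Finally, the RT-Middleware design means each of these steps runs as a separate component connected through data ports. The actual OpenRTM API is not shown in the abstract, so the sketch below only illustrates the data flow between the modules, reusing the helpers above with plain Python calls standing in for RTM ports; all names are ours.

```python
def pointing_pipeline(P1, P2, obs1, obs2, image):
    """Illustrative wiring of the modules described in the abstract.

    P1, P2 : projection matrices of the two tracking modules.
    obs1, obs2 : ((u, v) head, (u, v) hand) observed by each module.
    image : RGB frame from the first tracking module's camera.
    """
    head = triangulate(P1, P2, obs1[0], obs2[0])   # head in 3D
    hand = triangulate(P1, P2, obs1[1], obs2[1])   # hand in 3D
    target = indicated_floor_point(head, hand)     # indicated point on floor
    if target is None:
        return None
    return rgb_at_point(P1, image, target)         # RGB at the indicated point
```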
