Hand Segmentation from Depth Image using Anthropometric Approach in Natural Interface Development

Hand gestures are often used as a natural interface between humans and robots. To extract a hand gesture from a captured image, a hand-segmentation procedure must be performed. In this manuscript, a method for segmenting hand images from a depth image is proposed. The method applies image thresholding to isolate the human figure in the depth image, with the threshold level derived from human body dimensions (anthropometry). The centroid of the human silhouette is then used to separate the left and right regions of the body. Assuming that each region contains one hand and that the hands are positioned in front of the body, both hand images can be located. Hand segmentation then starts from anthropometric hand-pose data, which are used to compute the color values representing the depth of the hand in each image region; these values serve as the threshold for the corresponding region. The thresholding operation yields completely segmented hand images. The proposed method has low computation time and works well when its basic assumptions are fulfilled.
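The pipeline described above (body thresholding, centroid split, then a per-region depth threshold for the hand) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `segment_hands`, the specific threshold values, and the `hand_depth` margin standing in for the anthropometric hand-pose datum are all assumptions for demonstration.

```python
import numpy as np

def segment_hands(depth, body_thresh, hand_depth=80.0):
    """Sketch of the anthropometric hand-segmentation pipeline.

    depth       : 2D array of depth values (mm); smaller means closer to camera.
    body_thresh : depth threshold isolating the person from the background
                  (derived from anthropometric posture data in the paper).
    hand_depth  : assumed extent of the hand along the depth axis (mm); an
                  illustrative stand-in for the anthropometric hand-pose datum
                  used to compute the per-region threshold.
    Returns [left_mask, right_mask], boolean masks of the segmented hands.
    """
    # 1. Threshold the depth image to obtain the human body mask.
    body = depth < body_thresh

    # 2. The centroid column of the body mask splits the image into
    #    left and right regions.
    _, xs = np.nonzero(body)
    cx = int(xs.mean())

    hand_masks = []
    for cols in (slice(0, cx), slice(cx, depth.shape[1])):
        d = depth[:, cols]
        m = body[:, cols]
        if not m.any():
            hand_masks.append(np.zeros_like(m))
            continue
        # 3. The hand is assumed to be the part of the body nearest to the
        #    camera in this region; keep pixels within hand_depth of it.
        nearest = d[m].min()
        hand_masks.append(m & (d <= nearest + hand_depth))
    return hand_masks

# Synthetic example: background at 3000 mm, torso at 1500 mm,
# one hand pixel per region held forward at 1000 mm.
depth = np.full((10, 10), 3000.0)
depth[2:8, 2:8] = 1500.0          # body
depth[3, 3] = 1000.0              # left hand
depth[3, 6] = 1000.0              # right hand
left, right = segment_hands(depth, body_thresh=2000.0)
```

On this toy input, each returned mask contains only the single forward-held hand pixel of its region, since the torso lies more than `hand_depth` behind the nearest point.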
