Real-time gesture recognition and robot control through blob tracking

This paper presents a framework for a vision-based interface designed to instruct a humanoid robot through gestures using image processing. Image thresholding and blob detection techniques are used to extract gestures. The live images captured from a web camera are then analyzed to recognize the gesture made by the user, and an appropriate action is taken (such as taking a picture or moving the robot). The application was developed using the OpenCV (Open Source Computer Vision) library and Microsoft Visual C++. The gestures obtained by processing the live images are used to command a humanoid robot with simple capabilities. A commercial humanoid toy robot, Robosapien, serves as the output module of the system; it is interfaced to the computer via a USB-UIRT (Universal Infrared Receiver and Transmitter) module.
