Utilization of Depth and Color Information in Mobile Robotics

Computer vision plays an increasingly important role in robotics: as the computing power of modern computers grows year by year, more advanced algorithms can be deployed on mobile platforms. Alongside visual information, depth is widely used in mobile robot navigation, for example for obstacle detection. With cheap depth sensors now widely available, data from both sources can be combined to further enhance navigation and object detection. This article presents several ways of exploiting integrated video and depth images in mobile robotics: image segmentation for environment description, optical flow estimation for obstacle avoidance, and object detection for semantic map creation. All of the presented examples are based on real, working applications, which further demonstrates the validity of the proposed methods.
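As an illustration of the optical-flow component mentioned above, the sketch below implements the classical Horn and Schunck method in plain numpy. It is not the article's own implementation, only a minimal variational flow estimator: the flow field (u, v) is iteratively updated so that it satisfies the brightness-constancy constraint while staying smooth (regularization weight `alpha`). The function names and parameters are illustrative choices, not taken from the article.

```python
import numpy as np

def neighbor_avg(f):
    """Average of the four direct neighbors (wrap-around at the borders)."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Estimate a dense optical flow field (u, v) between two grayscale frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Spatial and temporal image derivatives
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg, v_avg = neighbor_avg(u), neighbor_avg(v)
        # Residual of the brightness-constancy constraint, shared by both updates
        resid = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * resid
        v = v_avg - Iy * resid
    return u, v
```

On a synthetic linear intensity ramp shifted one pixel to the right, the recovered horizontal flow converges toward u ≈ 1 with v ≈ 0; in an obstacle-avoidance setting, the same per-pixel flow field would be thresholded or compared against the flow expected from the robot's own motion.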
