Robust camera pose and scene structure analysis for service robotics

Successful path planning and object manipulation in service robotics rely both on a good estimate of the robot's position and orientation (pose) in the environment and on a reliable understanding of the imaged scene. In this paper, a robust real-time system for camera pose and scene structure estimation is proposed. First, the pose of the camera is estimated by analyzing so-called tracks, which comprise key features from the imaged scene together with the geometric constraints used to solve the pose estimation problem. Second, based on the computed pose of the camera, i.e., of the robot, the scene is analyzed through a robust depth segmentation and object classification approach. To segment object depth reliably, a feedback control technique is applied at the image processing level, improving the robustness of the robotic vision system against external influences such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. To control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated experimentally within a robotic collision avoidance system.
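The idea of feedback control at the image processing level can be illustrated with a minimal sketch: a proportional controller tunes a depth-segmentation threshold so that a measured quality signal (here, the segmented foreground ratio) tracks a reference value. This is not the paper's actual controller; all function names, the choice of foreground ratio as the controlled variable, and the gain values are illustrative assumptions.

```python
# Hypothetical sketch of closed-loop depth segmentation: a proportional
# controller adjusts the disparity threshold until the measured foreground
# ratio matches a reference ratio. Names and parameters are illustrative,
# not taken from the paper.

def segment_depth(disparity, threshold):
    """Binary foreground mask: pixels with disparity above the threshold."""
    return [[1 if d > threshold else 0 for d in row] for row in disparity]

def foreground_ratio(mask):
    """Fraction of pixels labeled foreground."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

def closed_loop_segmentation(disparity, reference_ratio=0.25,
                             gain=20.0, threshold=0.5, iterations=10):
    """Iteratively correct the threshold from the measured segmentation error."""
    mask = segment_depth(disparity, threshold)
    for _ in range(iterations):
        mask = segment_depth(disparity, threshold)
        error = reference_ratio - foreground_ratio(mask)
        # Proportional correction: too much foreground raises the threshold,
        # too little lowers it.
        threshold -= gain * error * 0.01
    return threshold, mask
```

In contrast to a fixed (open-loop) threshold, the loop compensates for disturbances such as illumination changes that shift the disparity statistics, which is the motivation the abstract gives for closing the loop around the segmentation step.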
