Fusion of Odometry and Visual Data for Mobile Robot Localization Using an Extended Kalman Filter

Applications involving wheeled mobile robots have grown significantly in recent years, thanks to their ability to move freely through the workspace, limited only by obstacles. Moreover, wheels make the robot easy to move in planar environments and provide stable support when the robot is stationary. In the context of autonomous robot navigation, the localization problem stands out: from accumulated knowledge about the environment and the current sensor readings, the robot must be able to determine and track its position and orientation relative to that environment, even when the sensors are subject to errors and/or noise. In other words, localizing a robot means determining its pose (position and orientation) in the workspace at a given time. Borenstein et al. (1997) classified localization methods into two broad categories: relative localization methods, which give the robot's pose with respect to an initial one, and absolute localization methods, which indicate the robot's global pose and do not depend on previously computed poses. For wheeled robots, it is common to use encoders attached to the wheel rotation axes, a technique known as odometry. However, odometry is based on integrating motion information over time, which leads to an accumulation of errors (Park et al., 1998). Absolute localization techniques instead use landmarks to locate the robot. These landmarks can be artificial, when introduced into the environment specifically to assist localization, or natural, when they are already present in the environment.
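The two ideas above can be sketched in code: dead-reckoning odometry integrates encoder readings into a pose (and thereby accumulates error), while a Kalman-style update corrects a prediction with an absolute measurement, weighted by uncertainty. The sketch below is illustrative only; the function names, the differential-drive model, and the scalar (1D) fusion are simplifying assumptions, not the paper's full EKF over the (x, y, theta) pose.

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """Integrate one pair of wheel-encoder displacements (meters) into the
    pose estimate of a differential-drive robot. Each call compounds any
    measurement error, which is why pure odometry drifts over time."""
    d_center = (d_left + d_right) / 2.0        # translation of robot center
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    # Normalize heading to (-pi, pi]
    theta = math.atan2(math.sin(theta + d_theta), math.cos(theta + d_theta))
    return x, y, theta

def fuse_1d(pred, pred_var, meas, meas_var):
    """Scalar Kalman update: blend an odometry prediction with an absolute
    (e.g. landmark-based) measurement, weighted by their variances."""
    k = pred_var / (pred_var + meas_var)  # Kalman gain
    est = pred + k * (meas - pred)
    est_var = (1.0 - k) * pred_var
    return est, est_var

# Driving straight ahead 1 m with equal wheel displacements:
pose = odometry_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.5)

# Equally uncertain prediction and measurement are averaged, and the
# fused variance is smaller than either input's:
est, est_var = fuse_1d(0.0, 4.0, 1.0, 4.0)
```

In the fusion step, a confident measurement (small `meas_var`) pulls the estimate strongly toward the landmark reading, which is exactly how absolute localization bounds the unbounded drift of odometry.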

[1] M. Buehler et al., "Three-state Extended Kalman Filter for Mobile Robot Localization," 2002.

[2] J. Borenstein, H. R. Everett, et al., "Mobile robot positioning: Sensors and techniques," J. Field Robotics, 1997.

[3] J. F. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986.

[4] A. Zelinsky, "Learning OpenCV---Computer Vision with the OpenCV Library (Bradski, G. R. et al.; 2008) [On the Shelf]," IEEE Robotics & Automation Magazine, 2009.

[5] M. Matteucci et al., "On the use of inverse scaling in monocular SLAM," 2009 IEEE International Conference on Robotics and Automation, 2009.

[6] G. K. I. Mann et al., "Landmark detection and localization for mobile robot applications: a multisensor approach," Robotica, 2009.

[7] G. Dai et al., "Monocular vision SLAM for large scale outdoor environment," 2009 International Conference on Mechatronics and Automation, 2009.

[8] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," 1960; reprinted in T. Başar (ed.), 2001.

[9] S. Thrun, "Probabilistic robotics," Communications of the ACM, 2002.

[10] J. G. Lee et al., "Dead reckoning navigation of a mobile robot using an indirect Kalman filter," 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996.

[11] S. Yuta et al., "A corridors lights based navigation system including path definition using a topologically corrected map for indoor mobile robots," Proceedings 2002 IEEE International Conference on Robotics and Automation, 2002.