Active Visual Perception for Mobile Robot Localization

Localization is a key issue for a mobile robot, in particular in environments where a globally accurate positioning system, such as GPS, is not available. In these environments, accurate and efficient robot localization is not a trivial task, as an increase in accuracy usually leads to a loss in efficiency and vice versa. Active perception appears as an appealing way to improve the localization process by increasing the richness of the information acquired from the environment. In this paper, we present an active perception strategy for a mobile robot equipped with a visual sensor mounted on a pan-tilt mechanism. The visual sensor has a limited field of view, so the goal of the active perception strategy is to use the pan-tilt unit to direct the sensor toward informative parts of the environment. To achieve this goal, we use a topological map of the environment and a Bayesian non-parametric estimation of the robot position based on a particle filter. We slightly modify the regular implementation of this filter by including an additional step that selects the best perceptual action using Monte Carlo estimations. We define the best perceptual action as the one that produces the greatest reduction in the uncertainty about the robot position. We also include in our optimization function a cost term that favors efficient perceptual actions. Previous works have proposed active perception strategies for robot localization, but mainly in the context of range sensors, grid representations of the environment, and parametric techniques such as the extended Kalman filter. Accordingly, the main contributions of this work are: i) development of a sound strategy for the active selection of perceptual actions in the context of a visual sensor and a topological map; ii) real-time operation using a modified version of the particle filter and Monte Carlo based estimations; and iii) implementation and testing of these ideas using simulations and a real-world scenario. Our results indicate that, in terms of accuracy of robot localization, the proposed approach reduces the mean error and standard deviation of the position estimate with respect to a passive perception scheme. Furthermore, in terms of efficiency, the active scheme is able to operate in real time without adding a significant overhead to the regular robot operation.
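
To make the action-selection step concrete, the following is a minimal sketch, not the authors' implementation, of how a particle filter can be augmented with a Monte Carlo choice of pan angle: for each candidate action, poses and observations are sampled from the current belief, the particle set is reweighted against the simulated observation, and the action with the largest expected entropy reduction minus an actuation cost is selected. All names (PAN_ACTIONS, COST_WEIGHT, observation_likelihood, expected_entropy_after, select_pan_action), the landmark layout, and the bearing-sensor model are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of the active perception step described
# above: given a particle set over robot pose, pick the pan-tilt action whose
# simulated observations yield the largest expected reduction in entropy,
# discounted by an assumed actuation cost. Sensor and map models are placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 500
PAN_ACTIONS = np.deg2rad([-60.0, -30.0, 0.0, 30.0, 60.0])  # candidate pan angles (hypothetical)
COST_WEIGHT = 0.05                                          # hypothetical cost trade-off

# Particles: (x, y, heading) with uniform weights.
particles = rng.uniform(low=[0.0, 0.0, -np.pi], high=[10.0, 10.0, np.pi],
                        size=(N_PARTICLES, 3))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

# Hypothetical landmark positions associated with a topological map node.
landmarks = np.array([[2.0, 8.0], [7.0, 3.0], [9.0, 9.0]])


def entropy(w):
    """Shannon entropy of a normalized weight vector (proxy for pose uncertainty)."""
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))


def observation_likelihood(particle, pan, obs, sigma=0.5):
    """Likelihood of observing bearing `obs` from `particle` after applying pan
    angle `pan`; uses the best-matching landmark (field-of-view handling omitted).
    Placeholder sensor model."""
    bearings = np.arctan2(landmarks[:, 1] - particle[1],
                          landmarks[:, 0] - particle[0]) - (particle[2] + pan)
    err = np.min(np.abs(np.arctan2(np.sin(bearings - obs), np.cos(bearings - obs))))
    return np.exp(-0.5 * (err / sigma) ** 2)


def expected_entropy_after(pan, n_samples=30):
    """Monte Carlo estimate of the posterior entropy if the camera were panned
    by `pan`: sample poses from the belief, simulate an observation, reweight."""
    total = 0.0
    for _ in range(n_samples):
        # Sample a hypothetical true pose and the bearing it would produce.
        idx = rng.choice(N_PARTICLES, p=weights)
        pose = particles[idx]
        nearest = np.argmin(np.linalg.norm(landmarks - pose[:2], axis=1))
        obs = np.arctan2(landmarks[nearest, 1] - pose[1],
                         landmarks[nearest, 0] - pose[0]) - (pose[2] + pan)
        # Reweight the whole particle set against the simulated observation.
        lik = np.array([observation_likelihood(p, pan, obs) for p in particles])
        w = weights * lik
        w /= w.sum() if w.sum() > 0 else 1.0
        total += entropy(w)
    return total / n_samples


def select_pan_action():
    """Pick the pan angle maximizing expected information gain minus actuation cost."""
    h_now = entropy(weights)
    scores = [h_now - expected_entropy_after(a) - COST_WEIGHT * abs(a)
              for a in PAN_ACTIONS]
    return PAN_ACTIONS[int(np.argmax(scores))]


if __name__ == "__main__":
    print(f"selected pan angle: {np.degrees(select_pan_action()):.1f} deg")
```

In a full system, the placeholder bearing model would presumably be replaced by the visual landmark likelihood tied to the topological map, and the uncertainty measure could be computed over map nodes rather than raw particle weights; the sketch only illustrates the Monte Carlo structure of the action selection.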
