Visual coverage using autonomous mobile robots for search and rescue applications

This paper focuses on visual sensing of large-scale 3D environments. Specifically, we consider a setting in which a group of camera-equipped robots must fully cover a surrounding area. To address this problem, we propose a novel descriptor for visual coverage that measures the visual information of an area based on a regular discretization of the environment into voxels. Moreover, we propose an autonomous cooperative exploration approach that controls the robots' movements so as to maximize information accuracy (defined in terms of our visual coverage descriptor) while minimizing movement costs. Finally, we define a simulation scenario based on real visual data and on widely used robotic tools (such as ROS and Stage) to empirically evaluate our approach. Experimental results show that the proposed method outperforms both a random baseline and an uncoordinated approach, making it a valid solution for visual coverage in large-scale outdoor scenarios.
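The core idea above can be sketched in code. The snippet below is a minimal illustration, not the paper's actual descriptor: it assumes a voxel grid keyed by integer indices, models per-voxel "information accuracy" as a saturating function of the observation count (a plausible stand-in for the paper's measure), and scores a candidate pose by the expected accuracy gain over the voxels it would see, minus a weighted travel cost. The class name `VoxelCoverage`, the saturation threshold, and the trade-off weight are all hypothetical choices for this sketch.

```python
from dataclasses import dataclass, field


@dataclass
class VoxelCoverage:
    """Hypothetical voxel-based visual coverage descriptor (sketch)."""
    resolution: float = 1.0                      # voxel edge length (meters)
    counts: dict = field(default_factory=dict)   # voxel index -> #observations

    def voxel_of(self, x, y, z):
        """Map a 3D point to the index of its enclosing voxel."""
        r = self.resolution
        return (int(x // r), int(y // r), int(z // r))

    def observe(self, points):
        """Register a set of 3D points seen from the current viewpoint."""
        for p in points:
            v = self.voxel_of(*p)
            self.counts[v] = self.counts.get(v, 0) + 1

    def accuracy(self, voxel, saturation=3):
        """Per-voxel information accuracy in [0, 1]; saturates after a
        few observations (assumed model, not the paper's definition)."""
        return min(self.counts.get(voxel, 0), saturation) / saturation

    def utility(self, candidate_voxels, move_cost, weight=0.5):
        """Score a candidate pose: expected accuracy gain over the voxels
        it would observe, minus a weighted movement cost."""
        gain = sum(1.0 - self.accuracy(v) for v in candidate_voxels)
        return gain - weight * move_cost


# Usage: each robot would greedily pick the candidate pose with the
# highest utility, re-evaluating as the shared grid fills in.
grid = VoxelCoverage(resolution=1.0)
grid.observe([(0.2, 0.1, 0.0), (1.5, 0.3, 0.0)])
best = max(
    [([(0, 0, 0), (2, 0, 0)], 2.0), ([(1, 0, 0)], 0.5)],
    key=lambda c: grid.utility(*c),
)
```

In a cooperative setting, sharing `counts` among robots lets each one discount voxels a teammate has already covered, which is one simple way to realize the coordination the paper evaluates against its uncoordinated baseline.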
