Visual SLAM for autonomous MAVs with dual cameras

This paper extends a monocular visual simultaneous localization and mapping (SLAM) system to two cameras with non-overlapping fields of view (FOVs), and uses it to enable autonomous navigation of a micro aerial vehicle (MAV) in unknown environments. The methodology behind this system extends readily to multi-camera rigs if the onboard computational capacity permits. We analyze the iterative optimizations used for pose tracking and map refinement of the SLAM system in the multi-camera case, which ensures the soundness and accuracy of each optimization update. Our method is more resistant to tracking failure than conventional monocular visual SLAM systems, especially when MAVs fly in complex environments, and it offers greater flexibility in the configuration of multiple cameras mounted onboard MAVs. We demonstrate its effectiveness in both autonomous and manual flights of a MAV, and evaluate the results against ground-truth data provided by an external tracking system.
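The abstract does not detail the pose-tracking optimization itself; as an illustrative sketch only (not the authors' implementation), the following hypothetical Python example shows one common way such a multi-camera pose update can be formulated: reprojection residuals from two rigidly mounted cameras with known, non-overlapping extrinsics are stacked into a single Gauss-Newton refinement of the 6-DoF body pose. All function names, the pinhole model, the focal length, and the synthetic data are assumptions introduced for this sketch.

```python
import numpy as np

def rot(w):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def project(p_cam, f=300.0):
    """Pinhole projection with focal length f, principal point at origin."""
    return f * p_cam[:2] / p_cam[2]

def residuals(pose, cams, points, obs):
    """Stack reprojection errors over all cameras and their landmarks.

    pose   = (tx, ty, tz, rx, ry, rz), body-to-world.
    cams   = list of (R_bc, t_bc) body-to-camera extrinsics.
    points = per-camera arrays of 3D landmarks in world coordinates.
    obs    = per-camera arrays of measured 2D image points.
    """
    R_wb, t_wb = rot(pose[3:]), pose[:3]
    r = []
    for (R_bc, t_bc), pts, uv in zip(cams, points, obs):
        for X, z in zip(pts, uv):
            p_b = R_wb.T @ (X - t_wb)   # world -> body frame
            p_c = R_bc @ p_b + t_bc     # body -> camera frame
            r.append(project(p_c) - z)
    return np.concatenate(r)

def gauss_newton(pose, cams, points, obs, iters=10, eps=1e-6):
    """Gauss-Newton on the 6-DoF body pose with a numerical Jacobian."""
    for _ in range(iters):
        r0 = residuals(pose, cams, points, obs)
        J = np.zeros((r0.size, 6))
        for i in range(6):
            d = np.zeros(6)
            d[i] = eps
            J[:, i] = (residuals(pose + d, cams, points, obs) - r0) / eps
        pose = pose - np.linalg.lstsq(J, r0, rcond=None)[0]
    return pose

# --- usage on synthetic data (all values hypothetical) ---
rng = np.random.default_rng(0)
cams = [(np.eye(3), np.zeros(3)),                         # forward-looking camera
        (rot(np.array([np.pi / 2, 0, 0])), np.zeros(3))]  # downward-looking camera
true = np.array([0.1, -0.2, 0.05, 0.01, 0.02, -0.01])     # ground-truth body pose
R_wb, t_wb = rot(true[3:]), true[:3]
points, obs = [], []
for R_bc, t_bc in cams:
    # Sample landmarks directly in each camera frame (positive depth),
    # then express them in the world frame via the true pose.
    p_c = np.column_stack([rng.uniform(-1, 1, 4),
                           rng.uniform(-1, 1, 4),
                           rng.uniform(3, 6, 4)])
    X = (R_wb @ (R_bc.T @ (p_c - t_bc).T)).T + t_wb
    points.append(X)
    obs.append(np.array([project(p) for p in p_c]))
est = gauss_newton(np.zeros(6), cams, points, obs)
```

Because both cameras constrain the same body pose, the stacked system stays well-conditioned even when one camera momentarily observes few features, which is one way a dual-camera setup can be more robust to tracking failure than a single camera.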
