Fusion of Monocular Visual-Inertial Measurements for Three Dimensional Pose Estimation

This work describes a novel fusion scheme for estimating the pose of a UAV using inertial sensors and a monocular camera. The visual motion algorithm is based on the plane-induced homography computed from so-called spectral features. The algorithm can operate on images containing only a small number of corner-like features, which makes the state estimation more robust. The key contribution of the paper is the use of this visual algorithm in a fusion scheme with inertial sensors, exploiting the complementary properties of the two sensing modalities. Results are presented in simulation for six-degree-of-freedom motion that satisfies the dynamic constraints of a quadcopter. Virtual views are generated along this simulated motion by cropping a real floor image. Simulation results show that the presented algorithm would be precise enough for use in an on-board algorithm to control the UAV during hovering operations.
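
To make the two ideas in the abstract concrete, the following Python sketch illustrates them under stated assumptions: "spectral features" are taken to mean frequency-domain phase correlation between image patches (a common spectral registration technique), and the visual-inertial fusion is illustrated with a simple low-gain complementary blend of an IMU-propagated velocity and a vision-derived velocity. The function names, the gain value, and the usage example are illustrative and not taken from the paper.

    # Minimal sketch (not the authors' implementation) of a spectral image
    # measurement and a complementary visual-inertial blend.
    import numpy as np

    def phase_correlation_shift(patch_a, patch_b):
        """Estimate the integer pixel shift between two equally sized
        grayscale patches via phase correlation (frequency-domain method)."""
        spec_a = np.fft.fft2(patch_a)
        spec_b = np.fft.fft2(patch_b)
        # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
        cross = spec_a * np.conj(spec_b)
        cross /= np.abs(cross) + 1e-12
        corr = np.abs(np.fft.ifft2(cross))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        dims = np.array(patch_a.shape)
        shift = np.array(peak, dtype=float)
        # Wrap peaks beyond half the patch size to signed (negative) shifts.
        shift[shift > dims / 2] -= dims[shift > dims / 2]
        return shift  # (row_shift, col_shift) of patch_a relative to patch_b

    def complementary_fuse(vel_imu, vel_vision, gain=0.02):
        """Blend an IMU-propagated velocity with a vision-derived velocity;
        a small gain trusts the IMU at high rate and the camera at low rate."""
        return (1.0 - gain) * vel_imu + gain * vel_vision

    # Hypothetical usage: register two synthetic patches shifted by (3, -5) px.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        base = rng.random((128, 128))
        moved = np.roll(base, shift=(3, -5), axis=(0, 1))
        print(phase_correlation_shift(moved, base))  # approximately [3., -5.]

In the actual system the per-patch shifts would feed the plane-induced homography and a filter-based fusion rather than this scalar blend; the sketch only shows the frequency-domain measurement and the complementary weighting idea.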
