A novel inertial-aided feature detection model for autonomous navigation in planetary landing

Abstract This paper proposes a novel visual feature detection model for navigation applications in the context of autonomous planetary landing. First, a novel Hessian marker, built upon the SURF detection model, is developed to adapt to the affine distortion between the descent image and the navigational map. An inertial-aided characteristic scale determination method is then constructed on top of this Hessian marker, yielding the complete inertial-aided visual feature detection model. Finally, a planet-representative simulation platform is developed to validate the proposed algorithm; on this platform, the approach is compared with state-of-the-art algorithms in terms of repeatability rate, robustness against affine image distortion, and navigation accuracy.
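The abstract does not spell out how inertial data constrains the detector, but the core idea of inertial-aided characteristic scale determination can be sketched as follows: the inertially estimated altitude predicts the scale ratio between the descent image and the map, so scale-space extrema need only be searched near the predicted characteristic scale. This is a minimal illustrative sketch, not the authors' implementation; the pinhole-camera assumption, the SURF-like scale ladder, and all function names and the tolerance parameter are assumptions introduced here.

```python
import numpy as np

def predict_scale_ratio(altitude_m, focal_px, map_gsd_m):
    """Predict the descent-image/map scale ratio from inertial altitude.

    Under a near-nadir pinhole model (an assumption), a feature of
    physical size d metres spans d / map_gsd_m pixels in the map and
    d * focal_px / altitude_m pixels in the descent image, so the
    expected ratio of characteristic scales is (focal_px * map_gsd_m)
    / altitude_m.
    """
    return (focal_px * map_gsd_m) / altitude_m

def scales_to_search(sigma_map, scale_ratio, tol=0.25, sigma_ladder=None):
    """Keep only scale-space levels consistent with the inertial prediction.

    Instead of detecting extrema over the full multi-octave pyramid,
    retain the levels whose sigma falls within a tolerance band around
    the predicted characteristic scale sigma_map * scale_ratio.
    """
    if sigma_ladder is None:
        # Illustrative SURF-like ladder of filter scales (in pixels):
        # four octaves, four intervals per octave.
        sigma_ladder = [1.2 * 2 ** (o + i / 4.0)
                        for o in range(4) for i in range(4)]
    target = sigma_map * scale_ratio
    lo, hi = target * (1 - tol), target * (1 + tol)
    return [s for s in sigma_ladder if lo <= s <= hi]

# Example: lander at 1500 m altitude, 1000 px focal length, 1 m/px map,
# looking for a map landmark whose characteristic scale is 4 px.
ratio = predict_scale_ratio(altitude_m=1500.0, focal_px=1000.0, map_gsd_m=1.0)
print(scales_to_search(sigma_map=4.0, scale_ratio=ratio))
```

Restricting the detector to this band is what makes the scheme inertial-aided: levels far from the predicted scale, which could only produce mismatches, are never evaluated, which also reduces computation during descent.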
