Visual localization for asteroid touchdown operation based on local image features

In an asteroid sample-return mission, accurate estimation of the spacecraft's position relative to the asteroid is essential for landing at the target point. During the Hayabusa and Hayabusa2 missions, the main part of the visual position estimation procedure was performed by human operators on Earth, based on sequences of asteroid images acquired and downlinked by the spacecraft. Although this approach is still adopted in critical space missions, there is an increasing demand for automated visual position estimation to reduce the time and cost of human intervention. In this paper, we propose a method for estimating the position of the spacecraft relative to the asteroid during the descent phase of a touchdown, from an image sequence, using state-of-the-art techniques for image processing, feature extraction, and structure from motion. We apply this method to real images of Ryugu taken by Hayabusa2 at altitudes from 20 km down to 500 m, and demonstrate that the method is practically useful at altitudes between 5 km and 1 km. This result indicates that our method could improve the efficiency of ground operations for global mapping and for navigation during the touchdown sequence; full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
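As a rough illustration of the kind of feature-based pipeline the abstract describes (a minimal sketch, not the authors' implementation), the following Python/OpenCV snippet estimates the relative camera pose between two consecutive descent images with SIFT keypoints, Lowe's ratio test, and a RANSAC-fitted essential matrix, then triangulates the inlier landmarks as a tiny two-view structure-from-motion step. The camera intrinsic matrix K, the image file names, and the function name are assumptions for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation): two-view
# relative pose from SIFT features + RANSAC essential matrix, followed by
# landmark triangulation. Requires OpenCV >= 4.4 and known intrinsics K.
import cv2
import numpy as np

def relative_pose_and_points(img1, img2, K):
    """Return (R, t, X): rotation, unit-scale translation, 3-D landmarks."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep only unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC rejects outlier matches while fitting the essential matrix;
    # recoverPose decomposes it into R and t (t is known only up to scale).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier correspondences (simple two-view SfM).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    Xh = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    X = (Xh[:3] / Xh[3]).T  # homogeneous -> Euclidean
    return R, t, X

# Hypothetical usage with two grayscale frames and known intrinsics:
# K = np.array([[fx, 0., cx], [0., fy, cy], [0., 0., 1.]])
# img1 = cv2.imread("descent_000.png", cv2.IMREAD_GRAYSCALE)
# img2 = cv2.imread("descent_001.png", cv2.IMREAD_GRAYSCALE)
# R, t, X = relative_pose_and_points(img1, img2, K)
```

Note that a monocular pipeline of this kind recovers translation only up to scale, so metric altitude would have to be fixed by an additional measurement, such as a laser altimeter.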
