Human and robotic planetary lander missions require accurate surface-relative position knowledge to land near science targets or next to pre-deployed assets. In the absence of GPS, accurate position estimates can be obtained by automatically matching sensor data collected during descent to an on-board map. The Lander Vision System (LVS) being developed for Mars landing applications generates landmark matches in descent imagery and combines them with inertial data to estimate vehicle position, velocity, and attitude. This paper describes recent LVS design work focused on making the map-relative localization algorithms robust to challenging conditions such as bland terrain, appearance differences between the map and the descent imagery, and errors in the initial input state. Improved results are shown using data from a recent LVS field test campaign. This paper also fills a gap in the analysis to date by assessing LVS performance on data sets containing significant vertical motion, including a complete data set from the Mars Science Laboratory mission, a Mars landing simulation, and field test data taken at multiple altitudes above the same scene. Accurate and robust performance is achieved for all data sets, indicating that vertical motion does not play a significant role in position estimation performance.
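The abstract does not specify the matching method used by the LVS, but the core idea of locating a descent-image landmark within an on-board map can be illustrated with a standard technique: normalized cross-correlation (NCC) template matching. The sketch below is a minimal, illustrative implementation, not the flight algorithm; the function name `ncc_match` and the exhaustive search strategy are assumptions for clarity.

```python
import numpy as np

def ncc_match(descent_patch, onboard_map):
    """Illustrative landmark matching: slide a descent-image patch over an
    on-board map and return the (row, col) of the window that maximizes
    normalized cross-correlation, along with the peak NCC score.

    NCC is invariant to local brightness offset and contrast scaling, which
    helps when the map and descent image differ in appearance."""
    ph, pw = descent_patch.shape
    mh, mw = onboard_map.shape
    # Zero-mean template and its norm, computed once.
    t = descent_patch - descent_patch.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_loc = -1.0, (0, 0)
    # Exhaustive search over all candidate map windows.
    for r in range(mh - ph + 1):
        for c in range(mw - pw + 1):
            w = onboard_map[r:r + ph, c:c + pw]
            w = w - w.mean()
            denom = t_norm * np.sqrt((w ** 2).sum())
            if denom == 0:
                continue  # flat (bland) window: correlation undefined
            score = float((t * w).sum() / denom)
            if score > best_score:
                best_score, best_loc = score, (r, c)
    return best_loc, best_score
```

In practice a flight system would restrict the search to a window predicted from the inertial state rather than scanning the whole map, and bland (low-texture) windows, which yield near-zero denominators and unreliable peaks, would be rejected, consistent with the robustness concerns the paper addresses.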