Passive Optical Terrain Relative Navigation Using APLNav

The NASA Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an autonomous precision landing system capable of landing a spacecraft on the Moon. To achieve the desired 90 m (3σ) landing accuracy, terrain relative navigation (TRN) is necessary. The Johns Hopkins University Applied Physics Laboratory (APL) is developing a TRN system that uses passive optical terrain sensing for lunar, asteroid, and comet landing applications. The system is derived from the Digital Scene Matching Area Correlation (DSMAC) navigation method, which has been used operationally on cruise missiles for the past three decades. APL was instrumental in DSMAC development and is now applying its expertise in vision-aided navigation techniques to prototype the autonomous precision landing navigation (APLNav) system. The APLNav system enables landing under illuminated conditions, including the low-illumination conditions at the lunar poles. The APLNav system uses multiple digital cameras to image the lunar surface. Each sensed image is correlated with a stored reference map; the map location of the maximum correlation value corresponds to the spacecraft's position at the time the image was taken. This position fix is used to correct the accumulated error in the spacecraft's inertial navigation system. APL is preparing a prototype for use in ALHAT-sponsored field tests. In each test, imagery of a surveyed landscape will be collected to assess APLNav algorithm performance. This paper describes the APLNav system, assesses its performance, and outlines the work being performed to bring the system to Technology Readiness Level 6, system demonstration in a relevant environment, by 2011.
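To make the area-correlation step concrete, the following is a minimal sketch, assuming a Python/NumPy implementation, of zero-mean normalized cross-correlation between a sensed camera image and a stored reference map. The function name ncc_locate, the brute-force search, and the synthetic test data are illustrative assumptions for this sketch, not the APLNav flight algorithm, which must also account for camera geometry, terrain relief, and illumination effects.

    import numpy as np

    def ncc_locate(sensed, reference):
        """Slide the sensed image over the reference map and return the
        (row, col) offset of the peak zero-mean normalized cross-correlation.
        Illustrative sketch only; not the APLNav flight implementation."""
        sh, sw = sensed.shape
        rh, rw = reference.shape
        # Zero-mean the sensed image once; its energy is reused at every offset.
        s = sensed - sensed.mean()
        s_energy = np.sqrt(np.sum(s * s))
        best_score, best_rc = -np.inf, (0, 0)
        # Brute-force search over every placement of the sensed footprint in the map.
        for r in range(rh - sh + 1):
            for c in range(rw - sw + 1):
                patch = reference[r:r + sh, c:c + sw]
                p = patch - patch.mean()
                denom = s_energy * np.sqrt(np.sum(p * p))
                if denom == 0.0:
                    continue  # featureless patch; correlation undefined here
                score = float(np.sum(s * p)) / denom
                if score > best_score:
                    best_score, best_rc = score, (r, c)
        return best_rc, best_score

    # Synthetic check: embed a noisy 16x16 footprint at a known map location.
    rng = np.random.default_rng(0)
    reference = rng.random((64, 64))                 # stands in for the stored map
    sensed = reference[20:36, 33:49] + 0.05 * rng.standard_normal((16, 16))
    fix, score = ncc_locate(sensed, reference)
    print(fix, round(score, 3))                      # expect (20, 33), score near 1.0

The returned (row, col) offset plays the role of the map-frame position fix described above; in the full system, that fix would be differenced with the inertially propagated state to correct the accumulated navigation error.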