The Optical Navigation (OPNAV) Toolkit is a collection of software tools that assists navigators and automates some of the multi-stage processes involved in obtaining OPNAV results. The toolkit is flexible and designed to complement the navigation analyst's workflow, producing optical navigation results in less time while providing additional feedback that is useful to analysts.

Introduction.

Optical navigation (OPNAV) has been an integral element of deep space exploration since its landmark use during the Voyager 1 and 2 flybys of Jupiter in 1979. The mainstream techniques used for OPNAV today (e.g., [3]) are derivatives of this early work. These methods have been widely deployed with great success, as exemplified by Cassini and New Horizons. Existing OPNAV pipelines for robotic space exploration have historically been implemented within a ground-in-the-loop paradigm, where images are downlinked and processed by analysts on Earth. The resulting OPNAV observables are combined with radiometric data to estimate the spacecraft trajectory. This paper summarizes preliminary work on a new OPNAV Toolkit intended to modernize some of the functions commonly performed during ground-based OPNAV operations. Specifically, new horizon-based OPNAV algorithms have recently been developed and have been shown to outperform legacy OPNAV techniques on Cassini data. We implement these algorithms within the MATLAB environment in a flexible and modular fashion, which allows new OPNAV pipelines to be easily assembled to solve various real-world problems. While these algorithms could also be used for autonomous on-board navigation, the present work focuses primarily on a toolkit to aid human analysts (who, for now, reside on Earth) performing OPNAV tasks. The OPNAV Toolkit contains a variety of functions designed to aid the user in performing common OPNAV tasks.
The user may follow a sample workflow, or use a select set of functions for a specific purpose. The toolkit's capabilities and components are now discussed in more detail.

Using the OPNAV Toolkit.

We have developed a flexible toolkit that leverages updated techniques and algorithms to produce robust and reliable optical navigation results. It uses several human-readable configuration files to aid in working with multiple datasets. These configuration files allow the user to change parameters easily, and to do so outside the code itself. Users can specify distinct settings for different camera specifications, and can have different settings for each image in the dataset. We have written these as text files so that the user can change settings in any text editor they choose. The OPNAV Toolkit presently supports two image distortion models: the Brown distortion model and the Owen distortion model. The Brown model is widely used within the computer vision community (e.g., OpenCV) and is the calibration framework selected for Orion. The Owen model has been widely used for the calibration of space imaging systems for planetary exploration and is helpful when processing legacy data (e.g., Cassini). The primary pipeline in the OPNAV Toolkit is designed for horizon-based OPNAV with respect to airless ellipsoidal bodies. Examples of such bodies include the Moon, Mercury, some of the moons of Saturn (e.g., Dione, Rhea, Tethys) and Jupiter (e.g., Ganymede, Callisto), and many of the minor planets (e.g., Ceres, Pluto). Following the procedure outlined by Christian in [6], this default toolkit workflow searches for the body's lit horizon in the image using a known illumination direction. Once a pixel-level estimate of the edge is found, it is refined to subpixel accuracy using a moment-based approach.
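For readers unfamiliar with the Brown model, the following sketch illustrates its forward mapping (radial plus decentering terms) applied to normalized, undistorted image coordinates. The function name and coefficient defaults are ours for illustration only and do not correspond to any calibrated camera; the Toolkit itself is implemented in MATLAB.

```python
import numpy as np

def brown_distort(x, y, k=(0.0, 0.0, 0.0), p=(0.0, 0.0)):
    """Apply the Brown (radial + decentering) distortion model to
    normalized, undistorted image coordinates (x, y).

    k = (k1, k2, k3) are radial coefficients; p = (p1, p2) are
    decentering (tangential) coefficients. All values here are
    placeholders, not calibrated values for any real camera."""
    k1, k2, k3 = k
    p1, p2 = p
    r2 = x * x + y * y                                  # squared radius
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3    # radial scale factor
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

With all coefficients zero the mapping reduces to the identity, which is a convenient sanity check when wiring a new camera configuration into a pipeline.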
By focusing on airless bodies, we avoid complications arising from the scattering of light in an atmosphere, so the subpixel edge points correspond directly to the body's hard boundary with space (i.e., the true limb). The resulting limb points in the image are used to estimate the position of the ellipsoidal celestial body within the camera frame using the SVD approach (instead of the Cholesky approach) from [6]. Each human-readable configuration file contains information on parameters that control the path through the default (horizon-based OPNAV) workflow. The configuration file header contains the following information for each parameter: a short name (e.g., GS for 'gradient strength'), full name, data type (e.g., uint16), and a brief description. The accompanying documentation contains a full narrative description (with figures and equations) of each parameter and how it functions. The analyst also uses a 'master' configuration file that specifies the paths to the images as well as the paths to the other configuration files. The analyst specifies which images to process using the image data configuration file, as well as certain data that pertain to each image. The required metadata are consistent with what is available on NASA Planetary Data System (PDS) archives (when processing legacy images), and the toolkit can easily be configured to accept only information that would be available for real-time use during an actual mission. A typical run of the OPNAV Toolkit default workflow produces a variety of outputs that are stored to the MATLAB workspace, with some key summary information being written to a human-readable text output file.

RPI Space Imaging Workshop. Saratoga Springs, NY. 28-30 October 2019.
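The least-squares machinery behind the limb fit can be illustrated on the related subproblem of recovering the best-fit horizon ellipse as an implicit conic, solved here via the SVD null space of the design matrix. This sketch is a generic conic fit for illustration only; it is not the full position-estimation algorithm of [6], and the Toolkit itself is in MATLAB.

```python
import numpy as np

def fit_implicit_conic(u, v):
    """Least-squares fit of the implicit conic
        A u^2 + B u v + C v^2 + D u + E v + F = 0
    to limb point coordinates (u, v).

    Each point contributes one row of the design matrix; the
    coefficient vector that minimizes ||D c|| subject to ||c|| = 1
    is the right singular vector of the smallest singular value."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    D = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]   # coefficients (A, B, C, D, E, F), up to scale
```

Because the coefficients are only determined up to scale, a caller typically normalizes them (e.g., by the leading coefficient) before comparing fits across images.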
What follows is a summary of the usual toolkit outputs:
• Subpixel limb edge coordinates, {u_i, v_i}
• Body center estimate, (u_c, v_c)
• Best-fit horizon ellipse (stored as coefficients of its implicit equation)
• Visualizations (if selected)
• Distortion maps from camera models
• Analytic OPNAV covariance matrix
• Undistorted versions of the original images
• Error flags

Results.

Our toolkit's flexibility and ease of use permit navigation analysts to quickly perform OPNAV tasks on a variety of images under different settings. We have used this toolkit to generate OPNAV solutions from Cassini mission images, and the toolkit produces results that compare favorably with those calculated using existing techniques. As an illustrative example, consider a Cassini NAC image of the Saturnian moon Tethys (Fig. 1). The raw image is overlaid with outputs from the toolkit. In this example, we know nothing about the spacecraft's inertial position or attitude; we know only that the camera's field of view contains the moon Tethys and the apparent direction of incoming sunlight. The horizon is clearly found by the toolkit.

Figure 1. Image of Tethys taken by the Cassini Narrow Angle Camera (NAC) on July 21, 2007 (raw image N1563723519). The red dots (which appear as a line because there are so many) are the subpixel estimate of the horizon. The green arrows indicate the assumed illumination direction, and the blue box indicates the bounding box containing the body of interest.

Figure 2. OPNAV Toolkit GUI. The raw image is overlaid with toolkit-produced metadata.

We have also created a graphical user interface (GUI) to facilitate easy visualization of data produced by the OPNAV Toolkit (see Fig. 2). Any image processed in a particular run may be selected from an automatically populated drop-down menu. The user may browse multiple images from the results, or view different bodies within the same image. The user may also choose which data (if any) to overlay on the image.
These overlay options include: the best-fit horizon ellipse, the body's centroid, the individual subpixel horizon points, a body bounding box, and the illumination direction. Additionally, a slider adjusts the contrast of the image. The overlays assist the analyst in understanding the output data and in checking for errors. The OPNAV Toolkit GUI displays the points, ellipse, and other overlays on the undistorted image, but the data can be transformed back to the original image using basic transformation functions within the Toolkit.
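As a sketch of how such a transformation works, the following function maps pixel coordinates measured in an undistorted image back to the raw image by applying a forward Brown-style distortion model. The function name, interface, and calibration values are hypothetical and do not represent the Toolkit's actual transformation functions.

```python
import numpy as np

def undistorted_to_raw(uv, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map pixel coordinates from the undistorted image back into the
    raw (distorted) image by applying forward Brown-style distortion.

    fx, fy are focal lengths in pixels; (cx, cy) is the principal
    point; k1, k2 and p1, p2 are radial and decentering coefficients.
    All calibration values are hypothetical placeholders."""
    uv = np.atleast_2d(np.asarray(uv, dtype=float))
    x = (uv[:, 0] - cx) / fx          # pixels -> normalized coordinates
    y = (uv[:, 1] - cy) / fy
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.column_stack([xd * fx + cx, yd * fy + cy])  # back to pixels
```

Overlay geometry (horizon points, ellipse samples, bounding-box corners) can be passed through such a mapping point by point, so the same annotations can be drawn on either the undistorted or the raw image.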
References.

[1] D. Brown, et al., "Decentering Distortion of Lenses," 1966.
[2] S. Synnott, et al., "Voyager Orbit Determination at Jupiter," 1983.
[3] J. Burns, et al., "Cassini Imaging Science: Instrument Characteristics and Anticipated Scientific Investigations at Saturn," 2004.
[4] A. Kaehler, et al., "Learning OpenCV," 1st ed., 2008.
[5] W. M. Owen, et al., "Methods of Optical Navigation," 2011.
[6] W. Marsden, "I and J," 2012.
[7] C. D. Jackman, et al., "Optical Navigation Preparations for New Horizons Pluto Flyby," 2012.
[8] J. A. Christian, et al., "Geometric Calibration of the Orion Optical Navigation Camera using Star Field Images," 2016.
[9] J. A. Christian, "Accurate Planetary Limb Localization for Image-Based Spacecraft Navigation," 2017.
[10] J. A. Christian, et al., "Parametric Covariance Model for Horizon-Based Optical Navigation," 2017.