Pose estimation and relative orbit determination of a nearby target microsatellite using passive imagery

A method of estimating the relative position and orientation of a known target satellite using only passive imagery is presented. Such a method is intended as a precursor to the systems required by future autonomous satellite docking missions. From a single monocular image, and using prior knowledge of the target spacecraft, the target's six relative rotation and translation parameters with respect to the camera are estimated. Pose estimation is divided into modular stages. Each frame is processed to detect the major lines in the image, and correspondences between detected lines and the a priori target model are estimated, resulting in a list of line-to-model correspondences. This correspondence information is used to estimate the pose of the target that would produce such a correspondence list. Multiple candidate pose estimates, each comprising three rotation and three translation parameters, are generated and tested. The best estimates proceed to a least-squares minimisation phase, which reduces estimation error and provides statistical information for multi-frame filtering. The final estimate vector and covariance matrix are the end result for each frame. Estimates of the target location over time allow the relative orbit parameters of the target to be estimated. Location estimates are filtered to fit an orbit model based on Hill's equations, and the statistical information gathered with each estimate is included in the filtering process when estimating the orbit parameters. These orbit parameters allow the target location to be predicted over time, enabling mission planning and safety analysis of potential orbit manoeuvres in close proximity to the target. Testing is carried out with a detailed simulation system, which renders accurate images of the target satellite given the true pose of the target with respect to the inertial reference frame.
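Hill's equations (also known as the Clohessy-Wiltshire equations) admit a closed-form solution for the relative state, which an orbit filter can use as its process model. The following sketch is illustrative only, assuming a circular chief orbit with mean motion `n` and a radial/along-track/cross-track frame; the function and variable names are hypothetical and do not reflect the implementation described above.

```python
import math

def cw_propagate(state0, n, t):
    """Propagate a relative state [x, y, z, vx, vy, vz] forward by time t
    using the closed-form Clohessy-Wiltshire (Hill) solution.
    x: radial, y: along-track, z: cross-track; n: chief mean motion (rad/s)."""
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0
         + (2 / n) * (c - 1) * vx0 + (1 / n) * (4 * s - 3 * n * t) * vy0)
    z = c * z0 + (s / n) * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return [x, y, z, vx, vy, vz]
```

A useful sanity check for such a process model is that a purely cross-track initial offset oscillates harmonically with the orbit period, while a radial offset induces a secular along-track drift.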
The rendering software accounts for lighting conditions, reflections, shadowing, specularity, and other effects, and further post-processing is applied to produce a realistic image. Target position over time is modelled with orbit dynamics with respect to a defined inertial frame. Transformations between the inertial, target, and camera frames of reference are handled, converting the rotation of the target in the inertial frame to its apparent rotation in the camera frame.
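The chaining of reference frames described above can be sketched with rotation matrices: an attitude known in the inertial frame is re-expressed in the camera frame by composing rotations. A minimal sketch, assuming simple single-axis rotations and hypothetical frame attitudes (not the convention or values used in this work):

```python
import math

def rot_z(theta):
    """3x3 rotation matrix for a right-handed rotation of theta radians about z."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# Hypothetical attitudes: the target rotates in the inertial frame, and the
# camera frame has its own inertial attitude; composing the two gives the
# apparent rotation of the target as seen from the camera.
R_target_from_inertial = rot_z(math.radians(30.0))   # inertial -> target
R_camera_from_inertial = rot_z(math.radians(90.0))   # inertial -> camera

# target -> camera: back to inertial (rotation transpose), then into the camera.
R_inertial_from_target = [[R_target_from_inertial[j][i] for j in range(3)]
                          for i in range(3)]
R_camera_from_target = mat_mul(R_camera_from_inertial, R_inertial_from_target)
```

Rotating a target-frame feature direction through `R_camera_from_target` then gives where that feature points in camera coordinates, which is the quantity the rendered image ultimately depends on.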