Autonomous satellite rendezvous and docking using lidar and model-based vision

Servicing satellites on-orbit requires the ability of an unmanned spacecraft to rendezvous and dock with no or minimal human input. Novel imaging sensors and computer vision technologies are required to detect a target spacecraft at a distance of several kilometers and to guide the approaching spacecraft to contact. Current optical systems operate at much shorter distances, provide only bearing and range to the target, or rely on visual targets. The emergence of novel LIDAR technologies and computer vision algorithms will lead to a new generation of rendezvous and docking systems in the near future. Such systems will be capable of autonomously detecting a target satellite at a distance of a few kilometers and estimating its bearing, range, and relative orientation under virtually any illumination and in any satellite pose. At MDA Space Missions we have developed a proof-of-concept vision system that uses a scanning LIDAR to estimate the pose of a known satellite. First, the vision system detects the target satellite and estimates its bearing and range. Next, the system estimates the full pose of the satellite using a 3D model. Finally, the system tracks the satellite's pose with a high accuracy and update rate. The estimated pose indicates where the docking port is located, even when the port is not visible, and enables selection of a more efficient flight trajectory. The proof-of-concept vision system has been integrated with a commercial time-of-flight LIDAR and tested against a moving, scaled satellite replica in the MDA Vision Testbed.
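
Model-based pose estimation and tracking of this kind is commonly built around iterative closest point (ICP) registration of the lidar point cloud against a 3D model of the known target, seeded by the coarse bearing/range estimate from the acquisition stage. The following Python sketch is a minimal, hypothetical illustration of one such refinement step, not the MDA implementation; the function name icp_refine and its parameters are assumptions made for illustration only.

# Hypothetical sketch (not the paper's implementation): refine the pose of a known
# satellite model against a lidar scan, starting from a coarse initial estimate.
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(model_pts, scan_pts, R0=np.eye(3), t0=np.zeros(3),
               iters=50, tol=1e-6):
    """Align model_pts (Nx3 samples of the satellite's 3D model) to scan_pts
    (Mx3 lidar returns). Returns R, t such that R @ model + t fits the scan."""
    tree = cKDTree(scan_pts)            # nearest-neighbour lookup in the scan
    R, t = R0.copy(), t0.copy()
    prev_err = np.inf
    for _ in range(iters):
        moved = model_pts @ R.T + t     # model under the current pose estimate
        dist, idx = tree.query(moved)   # closest scan point for each model point
        matched = scan_pts[idx]
        # Closed-form rigid alignment of the matched pairs (SVD of the
        # cross-covariance, as in Besl & McKay's ICP formulation).
        mu_m, mu_s = moved.mean(0), matched.mean(0)
        H = (moved - mu_m).T @ (matched - mu_s)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = mu_s - dR @ mu_m
        R, t = dR @ R, dR @ t + dt      # compose the incremental update
        err = dist.mean()
        if abs(prev_err - err) < tol:   # stop when the mean residual settles
            break
        prev_err = err
    return R, t

In a tracking loop of the sort described above, the pose estimated for one lidar frame would seed R0 and t0 for the next frame, which is what keeps per-frame refinement fast enough to sustain a high update rate.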
