Fast tracking and accurate pose estimation of space flying target based on monocular vision

Autonomous rendezvous and robotic capture of flying targets are widely required in space missions and are crucial for on-orbit servicing. To perform this task, tracking and pose estimation of the flying target is usually considered one of the most important issues to be addressed throughout the process. Taking into account the specifics of the space environment, such as lighting, and the continuity of the rendezvous process, this paper designs a fast tracking and accurate pose estimation algorithm for cooperative luminaries (or retro-reflectors) to guide a safe and reliable capture operation. Unlike existing target tracking or searching methods, this paper defines a new similarity measure for target appearance and exploits the continuity of target motion to limit the search to a predicted region of the image, which accelerates the search process. Meanwhile, the projective shape change of each luminary due to rotation is also considered, improving the accuracy of target extraction. With the positions of multiple target spots obtained from the image, the least-squares method is applied to iteratively refine the spatial pose, yielding an accurate pose estimate. Experiments on a simulated space target consisting of six LEDs validate the proposed method.
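The final stage described above, iterative least-squares refinement of the 6-DOF pose from the image positions of multiple luminary spots, can be sketched as a Gauss-Newton minimization of the reprojection error. This is a minimal illustration, not the paper's exact algorithm: the pinhole model, the six-LED layout, the focal length, and the initial pose guess are all assumptions made for the example.

```python
import numpy as np

def rodrigues(rvec):
    """Convert an axis-angle rotation vector to a rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(points, rvec, tvec, f):
    """Pinhole projection of 3-D model points under pose (rvec, tvec)."""
    cam = points @ rodrigues(rvec).T + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def estimate_pose(model, image_pts, f, iters=50):
    """Gauss-Newton least-squares refinement of the 6-DOF pose.

    model:     (N, 3) known 3-D positions of the luminaries on the target
    image_pts: (N, 2) measured spot centers in the image
    Returns (rvec, tvec). The initial guess below is an assumption:
    zero rotation, target a few meters in front of the camera.
    """
    x = np.zeros(6)          # parameters: [rvec | tvec]
    x[5] = 5.0               # crude initial depth guess
    eps = 1e-6
    for _ in range(iters):
        r = (project(model, x[:3], x[3:], f) - image_pts).ravel()
        # Numerical Jacobian of the reprojection residual w.r.t. the pose
        J = np.zeros((r.size, 6))
        for j in range(6):
            dx = np.zeros(6)
            dx[j] = eps
            rp = (project(model, (x + dx)[:3], (x + dx)[3:], f)
                  - image_pts).ravel()
            J[:, j] = (rp - r) / eps
        # Least-squares Gauss-Newton step
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        x += step
        if np.linalg.norm(step) < 1e-10:
            break
    return x[:3], x[3:]
```

With six well-spread LEDs the system is over-determined (12 residuals, 6 unknowns), so the least-squares step both fixes the pose and averages out spot-center measurement noise; the iteration stops once the update becomes negligible.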
