A low-cost test-bed for real-time landmark tracking

A low-cost vehicle test-bed system was developed to iteratively test, refine, and demonstrate navigation algorithms before attempting to transfer them to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and an onboard laptop computer allow for either autonomous operation or remote operation from a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a two-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and a wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars Yard, with the algorithms tracking rocks as waypoints. The resulting landmark coordinates were used to calculate relative motion and to visually servo the vehicle toward science targets. A limitation of the current system is its serial computation: each additional landmark is tracked in sequence. However, because each landmark is tracked independently, transferring the algorithms to appropriate parallel hardware would allow targets to be added without significantly diminishing system speed.
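The parallelization argument above can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: the per-landmark tracker is a hypothetical stub standing in for the actual image-based tracking, and it shows only why independent per-landmark updates can run serially or concurrently with identical results.

```python
# Illustrative sketch (assumed, not from the paper): each landmark is
# tracked independently, so per-frame updates parallelize trivially.
from concurrent.futures import ThreadPoolExecutor

def track_landmark(landmark, frame):
    """Hypothetical per-landmark update: returns the landmark's new image
    coordinates in the given frame. Here a stub reading a precomputed
    position, standing in for the real feature-tracking step."""
    return frame["positions"][landmark]

def track_all_serial(landmarks, frame):
    # Current test-bed behavior: landmarks are tracked one after another,
    # so per-frame cost grows with the number of landmarks.
    return {lm: track_landmark(lm, frame) for lm in landmarks}

def track_all_parallel(landmarks, frame):
    # On parallel hardware, the same independent updates run concurrently,
    # so adding landmarks need not reduce the per-frame rate.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda lm: track_landmark(lm, frame), landmarks)
    return dict(zip(landmarks, results))

frame = {"positions": {"rock_a": (120, 85), "rock_b": (310, 142)}}
landmarks = ["rock_a", "rock_b"]
# Both schedules produce identical landmark coordinates.
assert track_all_serial(landmarks, frame) == track_all_parallel(landmarks, frame)
```

Because no state is shared between landmarks, the serial and parallel schedules are interchangeable, which is the property that makes the proposed hardware transfer attractive.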
