Vision-guided self-alignment and manipulation in a walking robot

One of the robots under development at NASA's Jet Propulsion Laboratory (JPL) is the limbed excursion mechanical utility robot, or LEMUR. Several of the tasks slated for this robot require a computer vision system that interfaces with the robot's other subsystems, such as walking, body pose adjustment, and manipulation. This paper describes the vision algorithms used in several tasks, as well as the vision-guided manipulation algorithms developed to mitigate mismatches between the vision system and the limbs used for manipulation. Two system-level tasks are described: one involving a two-meter walk culminating in a bolt-fastening task, and one involving a vision-guided alignment ending with the robot mating with a docking station.