Quarterly Progress Report on Contract N00014-93-1-1235 for May 1994 - July 1994 (Yale University, New Haven, Connecticut)

Abstract: In this quarter, we extended our visual servoing capabilities to include operations such as visual alignment along an axis and full six-degree-of-freedom relative positioning. We demonstrated the use of alignment by programming the system to place a screwdriver onto a screw. As with all other visual control operations, these are calibration-insensitive. We also demonstrated vision-based robot control: we have developed a small piloting program that permits a user to guide the robot using visual tracking. The user can point at objects such as a door or window, or at simple features such as corners or other areas of high contrast, and instruct the robot to home in on those features while performing obstacle avoidance. We have also demonstrated some early results on automatically selecting features to track. We have been running experiments to test our object-recognition algorithms. We generated test images by dropping two-dimensional objects into random cluttered arrangements on a tabletop. Fifty of the images contained the target, almost always occluded; 25 did not contain it. We ran the algorithm on all 75 images. When the object was present, the algorithm produced an average of 2.13 feasible interpretations, which included the actual object whenever it was present and less than 90% occluded. When the object was absent, the algorithm produced fewer than one feasible interpretation on average. The goal of the algorithm is to filter the edge sets from the raw image so that a detailed matcher has to be called only once or twice per image; so far, it appears to be completely successful. The bogus interpretations the algorithm finds can be quickly rejected by slightly more sophisticated matching algorithms. The results will be reported in a forthcoming paper by Tagare and McDermott.
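
Illustrative sketch (not from the report): the abstract does not describe how the servoing loop achieves calibration insensitivity. One standard way, shown below, is to estimate the image Jacobian online with a Broyden-style rank-one update rather than deriving it from calibrated camera parameters. The interfaces track_feature and move_joints are hypothetical stand-ins for the tracker and robot.

    import numpy as np

    def broyden_update(J, dq, df, lam=0.2):
        """Rank-one update of the estimated image Jacobian J from an
        executed joint displacement dq and the observed feature change df."""
        denom = float(dq @ dq)
        if denom > 1e-9:
            J = J + lam * np.outer(df - J @ dq, dq) / denom
        return J

    def servo_to_alignment(track_feature, move_joints, J0, goal,
                           gain=0.5, tol=1.0, max_iters=500):
        """Drive the tracked image feature toward `goal` (pixels) with a
        proportional law dq = -gain * pinv(J) @ error.  Because J is
        estimated from observed motion, the loop needs no camera
        calibration.  Both callables are hypothetical interfaces."""
        J = np.array(J0, dtype=float)
        f = track_feature()
        for _ in range(max_iters):
            err = f - goal
            if np.linalg.norm(err) < tol:
                return True                       # aligned within tolerance
            dq = -gain * np.linalg.pinv(J) @ err  # image-space proportional step
            move_joints(dq)
            f_new = track_feature()
            J = broyden_update(J, dq, f_new - f)  # refine Jacobian from data
            f = f_new
        return False

The same loop covers both the axis-alignment and the full six-degree-of-freedom case; only the dimension of the feature vector and the joint displacement changes.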
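
The report also does not say how the piloting demo combines homing with obstacle avoidance. A minimal sketch of one common choice, a potential-field controller that attracts the robot toward the user-selected feature and repels it from nearby obstacles, is given below; all names and gains are hypothetical.

    import numpy as np

    def pilot_step(robot_xy, target_xy, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
        """One control step: attract toward the tracked feature the user
        pointed at, repel from each obstacle closer than the cutoff d0.
        Returns a force vector to be interpreted as a commanded velocity."""
        # attractive component toward the tracked feature
        force = k_att * (target_xy - robot_xy)
        # repulsive component from each nearby obstacle
        for obs in obstacles:
            diff = robot_xy - obs
            d = np.linalg.norm(diff)
            if 1e-6 < d < d0:
                force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
        return force

In such a scheme the target position comes straight from the visual tracker each cycle, so the robot keeps homing on the feature even as it detours around obstacles.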
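
As a sketch of the two-stage recognition pipeline the abstract describes, the code below runs a cheap feasibility filter over the image's edge sets and invokes the expensive detailed matcher only on the one or two surviving interpretations. Both callables are hypothetical stand-ins, not the authors' actual algorithms.

    from typing import Any, Callable, Iterable, Optional, Sequence

    def recognize(edge_sets: Sequence[Any],
                  model: Any,
                  feasible_filter: Callable[[Sequence[Any], Any], Iterable[Any]],
                  detailed_matcher: Callable[[Any, Any], bool]) -> Optional[Any]:
        """Two-stage recognition: the cheap filter prunes the edge sets
        down to a handful of feasible interpretations (about two per image
        in the experiments above), so the detailed matcher runs only once
        or twice per image."""
        for hypothesis in feasible_filter(edge_sets, model):
            if detailed_matcher(model, hypothesis):
                return hypothesis    # verified interpretation of the target
        return None                  # target absent, or more than 90% occluded

The economics of the design rest on the numbers reported above: with roughly two feasible interpretations per image when the target is present and fewer than one when it is absent, the detailed matcher's cost is paid only a couple of times per image.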