Learning, positioning, and tracking visual appearance

The problem of vision-based robot positioning and tracking is addressed. A general learning algorithm is presented for determining the mapping between robot position and object appearance. The robot is first moved through several displacements with respect to its desired position, and a large set of object images is acquired. This image set is compressed using principal component analysis to obtain a four-dimensional subspace. Variations in object images due to robot displacements are represented as a compact parametrized manifold in the subspace. While positioning or tracking, errors in end-effector coordinates are efficiently computed from a single brightness image using the parametric manifold representation. The learning component enables accurate visual control without any prior hand-eye calibration. Several experiments have been conducted to demonstrate the practical feasibility of the proposed positioning/tracking approach and its relevance to industrial applications.
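To make the pipeline concrete, the sketch below illustrates the two phases described in the abstract: learning (compress displacement-indexed images with principal component analysis and record the resulting manifold samples in the subspace) and positioning (project a single brightness image and read off the displacement of the closest manifold point). It is a minimal illustration under simplifying assumptions, not the paper's implementation: a single scalar displacement, synthetic image data, and nearest-sample lookup in place of the interpolated parametric manifold.

```python
import numpy as np

# Minimal sketch of appearance-manifold learning for visual positioning.
# Assumptions (illustrative only): 1-DOF displacement, synthetic images,
# nearest-sample lookup instead of a spline-interpolated manifold.

def learn_manifold(images, displacements, k=4):
    """Compress training images with PCA and sample the appearance manifold.

    images: (N, P) array, one vectorized brightness image per row.
    displacements: (N,) array of known end-effector displacements.
    k: subspace dimension (the abstract uses a four-dimensional subspace).
    """
    mean = images.mean(axis=0)
    X = images - mean
    # Principal components via SVD of the centered image set.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                      # (k, P) orthonormal basis
    coords = X @ basis.T                # (N, k) manifold sample points
    return mean, basis, coords, displacements

def estimate_displacement(image, mean, basis, coords, displacements):
    """Project a single brightness image and find the closest manifold point."""
    z = (image - mean) @ basis.T
    # Nearest training sample in the subspace; an interpolated manifold
    # would give sub-sample accuracy.
    idx = np.argmin(np.linalg.norm(coords - z, axis=1))
    return displacements[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic training set: appearance varies smoothly with displacement d.
    d_train = np.linspace(-1.0, 1.0, 41)
    pattern = rng.standard_normal((3, 256))
    imgs = np.stack([pattern[0] + d * pattern[1] + d**2 * pattern[2]
                     for d in d_train])
    model = learn_manifold(imgs, d_train, k=4)

    d_true = 0.37
    test = pattern[0] + d_true * pattern[1] + d_true**2 * pattern[2]
    print("estimated displacement:", estimate_displacement(test, *model))
```

In this form the positioning step needs only one subspace projection and a nearest-point search, which is what makes per-frame error computation cheap enough for tracking.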
