Robot self-learning visual servoing algorithm using neural networks

A self-learning controller for a robot manipulator visual servoing system with a camera in hand, tracking a moving object, is presented. Neural networks provide a direct mapping from the visual domain to the joint domain without requiring camera calibration. A technique that uses monocular vision without explicitly estimating visual depth is also given; the visual sensory input is translated directly into joint accelerations. Simulation results show that the method drives the static tracking error to zero quickly while maintaining good robustness and adaptability.
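The core idea, a controller that learns the visual-to-joint mapping from experience rather than from camera calibration, can be sketched as follows. This is not the paper's network or its acceleration-level control law: a linear map fit by least squares stands in for the neural network, the camera model is a toy linear one, and the names (`camera`, `J_true`, `W`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown image Jacobian of the simulated camera-in-hand setup.
# The controller never reads this directly; it only sees features.
J_true = np.array([[1.5, -0.4],
                   [0.3,  1.2]])

def camera(q):
    """Toy camera model: image features as a function of joint angles."""
    return J_true @ q

# Phase 1: self-learning. Make small exploratory joint moves and record
# (feature change, joint change) pairs, then fit a visual-to-joint map W.
q = np.zeros(2)
dqs, dfs = [], []
for _ in range(20):
    dq = rng.normal(size=2) * 0.05
    f0 = camera(q)
    q = q + dq
    dfs.append(camera(q) - f0)
    dqs.append(dq)
W = np.linalg.lstsq(np.array(dfs), np.array(dqs), rcond=None)[0].T

# Phase 2: image-based servoing. The learned map W plays the role of the
# neural network, turning image-plane error directly into joint commands.
target = np.array([0.8, -0.5])
for _ in range(100):
    e = target - camera(q)       # error measured in the image plane only
    q = q + 0.5 * (W @ e)        # no calibrated Jacobian inverse needed

err = float(np.linalg.norm(target - camera(q)))
print(err)
```

With a full-rank set of exploration pairs the learned map approximates the inverse image Jacobian, so each servo step contracts the image error geometrically; in a real system the map would be a nonlinear network updated online as the paper describes.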
