Robot end-effector 2D visual positioning using neural networks

This paper presents a visual positioning controller for a robot manipulator with an eye-in-hand camera, in which a feedforward neural network, rather than a proportional controller, drives the manipulator's end-effector to the desired position. The network maps the visual sensory input directly to the actuator domain. Simulation results show that, compared with a proportional control law, this method drives the static positioning error to zero quickly while maintaining good dynamic response.
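The abstract does not specify the network architecture or training procedure, so the following is only a minimal Python sketch of the idea under stated assumptions: a planar (2D) task, a constant image Jacobian `J` (known to the simulator but not to the controller), and a small tanh MLP trained offline to map image-plane error directly to an actuator command. A proportional law `u = -λe` serves as the baseline; the names `plant`, `mlp`, and `proportional` are all hypothetical.

```python
import numpy as np

# Hypothetical planar setup: the "plant" maps a 2D actuator command to a
# 2D displacement of the image feature via a fixed, unknown linear map.
rng = np.random.default_rng(0)
J = np.array([[1.3, 0.4],
              [-0.2, 0.9]])            # assumed constant image Jacobian

def plant(u):
    """Image-feature displacement produced by actuator command u."""
    return J @ u

# --- Proportional baseline: u = -lambda * e (e = image-plane error) ---
def proportional(e, lam=0.5):
    return -lam * e

# --- Feedforward network: a 2-8-2 tanh MLP trained to invert the plant ---
W1 = rng.normal(scale=0.5, size=(8, 2)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(2, 8)); b2 = np.zeros(2)

def mlp(e):
    h = np.tanh(W1 @ e + b1)
    return W2 @ h + b2

# Train on random errors: the ideal command cancels the error in one step,
# plant(u) = -e, so the regression target is u* = -J^{-1} e. The target is
# computed with knowledge the controller itself never uses at run time.
for _ in range(5000):
    e = rng.uniform(-1, 1, size=2)
    target = -np.linalg.solve(J, e)
    h = np.tanh(W1 @ e + b1)
    u = W2 @ h + b2
    g = u - target                      # dLoss/du for squared error
    W2 -= 0.05 * np.outer(g, h); b2 -= 0.05 * g   # plain SGD backprop
    gh = (W2.T @ g) * (1 - h**2)
    W1 -= 0.05 * np.outer(gh, e); b1 -= 0.05 * gh

# --- Compare closed-loop convergence of the two control laws ---
for name, ctrl in [("proportional", proportional), ("neural", mlp)]:
    e = np.array([0.8, -0.6])           # initial image-plane error
    for step in range(10):
        e = e + plant(ctrl(e))          # closed-loop error update
    print(f"{name:12s} error after 10 steps: {np.linalg.norm(e):.2e}")
```

In this toy setting the trained network approximates the one-step-deadbeat command and so removes the static error almost immediately, while the proportional loop only contracts the error geometrically; this mirrors, in miniature, the comparison the abstract reports.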
