Evolving lifelong learners for a visually guided arm

The primary objective was to develop a fast-learning dynamic controller for uncalibrated visual guidance of a robotic arm. Combining neural network learning with an evolutionary method allowed the interaction of the two techniques to be studied in a non-trivial real-world application. During its lifetime, the neural network controller learned the relationship between the motor commands sent to the arm and the resulting changes in the image coordinates of the arm's end effector, as observed in two cameras. This eliminated the need for calibration and made the controller robust to repositioning of the equipment. Many parameters of the controller were evolved by an evolutionary algorithm, but not the neural network weights. The aim was to produce a neural network that could rapidly learn the geometry of the arm space using the backpropagation (BP) weight-training rule, rather than evolving the weights directly. This is the first time that such a combination of evolutionary and neural computing research techniques has been used in the context of a robotic manipulator application. To reduce the time taken for the evolution to within practical limits, a minimal simulation approach was used to evolve the learning parameters, and the resulting networks were tested both on the simulator and on a physical robot arm in the real world.
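The core idea of evolving the learning parameters, rather than the weights, of a BP-trained controller can be sketched as follows. This is a minimal illustrative example, not the paper's actual setup: the toy motor-to-image mapping, the 2-8-2 network, and the simple elitist genetic algorithm over (learning rate, momentum) genomes are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=200):
    # Toy stand-in for the arm: motor commands -> observed image-coordinate
    # shifts of the end effector (the real system used two camera views).
    x = rng.uniform(-1, 1, (n, 2))
    y = np.tanh(x @ np.array([[1.0, 0.4], [-0.3, 0.8]]))
    return x, y

def train_and_score(lr, momentum, epochs=50, hidden=8):
    """Train a 2-8-2 MLP with plain backpropagation during its 'lifetime';
    fitness is the final mean squared error (lower is better)."""
    x, y = make_task()
    w1 = rng.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 0.5, (hidden, 2)); b2 = np.zeros(2)
    vel = [np.zeros_like(p) for p in (w1, b1, w2, b2)]
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)           # forward pass
        out = h @ w2 + b2
        err = out - y                      # backward pass
        g2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ w2.T) * (1 - h ** 2)
        g1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        for i, (p, g) in enumerate(zip((w1, b1, w2, b2), (g1, gb1, g2, gb2))):
            vel[i] = momentum * vel[i] - lr * g   # momentum update
            p += vel[i]
    return float(np.mean((out - y) ** 2))

def evolve(pop_size=10, gens=5):
    # Each genome encodes a pair of BP learning parameters, not weights.
    pop = [(rng.uniform(0.01, 1.0), rng.uniform(0.0, 0.9))
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda g: train_and_score(*g))
        elite = scored[: pop_size // 2]
        # Refill the population with mutated copies of the elite.
        pop = elite + [(abs(lr + rng.normal(0, 0.05)),
                        min(max(m + rng.normal(0, 0.05), 0.0), 0.99))
                       for lr, m in elite]
    return min(pop, key=lambda g: train_and_score(*g))

best_lr, best_mom = evolve()
print(f"evolved lr={best_lr:.3f}, momentum={best_mom:.3f}")
```

In this sketch the evolutionary search operates only on how fast the network learns, while each candidate still acquires the motor-to-image mapping itself through backpropagation within its own lifetime, mirroring the division of labour described above.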