Robotic applications of VAM-based invariant representation for active vision

Active vision refers to purposeful changes in the camera configuration that aid the processing of visual information. An important issue in active vision is the need to represent the 3D environment in a manner that is invariant to changes in the camera configuration. Conventional methods require precise knowledge of various camera parameters to build this representation; however, these parameters are prone to calibration errors. This motivates us to explore a neural-network-based approach that uses the Vector Associative Map (VAM) to learn an invariant representation of 3D point targets for active vision. An efficient learning scheme is developed that is suitable for robotic implementation. The representation thus learned is also independent of the intrinsic parameters of the imaging system, making it immune to systematic calibration errors. To evaluate the effectiveness of this scheme, computer simulations were first performed using a detailed model of the University of Illinois Active Vision System (UIAVS), followed by experimental verification on the actual UIAVS. Several robotic applications that exploit the invariance of the learned representation are then explored, including motion detection, active-vision-based robot control, robot motion planning, and saccade sequence planning.