Spatiotemporal Representations for Visual Navigation

The study of visual navigation problems requires the integration of visual processes with motor control. Essential to this integration is the study of appropriate spatiotemporal representations which the system computes from the imagery and which serve as interfaces to all motor activities. Since representations resulting from exact quantitative reconstruction have turned out to be very hard to obtain, we argue here for the necessity of representations that can be computed easily, reliably, and in real time, and that recover only the information about the 3D world that is actually needed to solve the navigational problems at hand. In this paper we introduce a number of such representations, capturing aspects of 3D motion and scene structure, which are used to solve navigational problems implemented in visual servo systems. In particular, the following three problems are addressed: (a) changing the robot's direction of motion towards a fixed direction, (b) pursuing a moving target while keeping a certain distance from it, and (c) following a wall-like perimeter. The importance of the introduced representations lies in the following. They can be extracted using minimal visual information, in particular the sign of flow measurements or the first-order spatiotemporal derivatives of the image intensity function; in that sense they are direct representations, needing no intermediate level of computation such as correspondence. They are global in the sense that three-dimensional information is encoded in them over the whole image; thus they are robust representations, since local errors do not affect them. Whereas three-dimensional quantities such as motion and shape are usually computed from image sequences and then used as input to control processes, the representations discussed here are given directly as input to the control procedures, thus resulting in a real-time solution.
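To make the notion of "minimal visual information" concrete: the sign of the normal flow (the flow component along the local intensity gradient) follows directly from the first-order spatiotemporal derivatives, since the brightness-constancy constraint gives a normal flow of -I_t/|∇I| along the gradient direction. The sketch below is illustrative only, assuming a two-frame grayscale sequence; the function name and thresholds are ours, not the authors' implementation.

```python
import numpy as np

def normal_flow_sign(frame0, frame1, grad_thresh=1e-3):
    """Sign of the normal flow from first-order spatiotemporal derivatives.

    Returns +1 / -1 where the pattern moves along / against the local
    intensity gradient, and 0 where the gradient is too weak to decide.
    No correspondence or full flow estimation is needed.
    """
    f0 = frame0.astype(np.float64)
    f1 = frame1.astype(np.float64)
    # First-order spatial derivatives (finite differences on the first frame).
    Ix = np.gradient(f0, axis=1)
    Iy = np.gradient(f0, axis=0)
    It = f1 - f0                      # temporal derivative
    grad_mag = np.hypot(Ix, Iy)
    # Brightness constancy: normal flow along the gradient is -It / |grad I|,
    # so its sign is the sign of -It wherever the gradient is significant.
    sign = np.zeros_like(f0)
    valid = grad_mag > grad_thresh
    sign[valid] = np.sign(-It[valid])
    return sign

# A linear ramp shifted one pixel to the right yields positive normal flow
# (motion along the +x gradient direction) everywhere.
x = np.tile(np.arange(16.0), (8, 1))
s = normal_flow_sign(x, x - 1.0)
```

Such a per-pixel sign map is exactly the kind of direct, globally aggregated measurement the paper advocates feeding to the control loop without an intermediate 3D reconstruction.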
