An image-based visual-motion-cue for autonomous navigation

This paper presents a novel time-based visual motion cue, called the Hybrid Visual Threat Cue (HVTC), that provides a measure of the change in relative range as well as absolute clearances between a 3D surface and a moving observer. It is shown that the HVTC is a linear combination of Time-To-Contact (TTC), visual looming, and the Visual Threat Cue (VTC). The visual field associated with the HVTC can be used to demarcate the regions around a moving observer into safe and danger zones of varying degree, which makes it suitable for autonomous navigation tasks. The HVTC is independent of the 3D environment and needs almost no a priori information about it. It is rotation independent and is measured in [time]^-1 units. Several approaches to extract the HVTC are suggested, including a practical method to extract it from a sequence of images of a 3D textured surface obtained by a visually fixating, fixed-focus monocular camera in motion. This approach to extracting the HVTC is independent of the type of 3D surface texture and needs no optical flow information, 3D reconstruction, segmentation, or feature tracking.
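The range-based cues named above can be sketched numerically. This is a minimal illustration, not the paper's method: the VTC form is assumed from Raviv's related visual-threat-cue work, and the combination weights `w1`, `w2` are hypothetical placeholders for whatever linear combination defines the HVTC.

```python
def looming(R, R_dot):
    """Visual looming: relative rate of decrease of range (units: time^-1).
    For a fixated point this equals the inverse time-to-contact, 1/TTC."""
    return -R_dot / R

def vtc(R, R_dot, R0):
    """Visual Threat Cue for a desired minimum clearance R0 (units: time^-1).
    Form assumed from related work; meaningful for R > R0 > 0."""
    return -R0 * R_dot / (R * (R - R0))

def hvtc(R, R_dot, R0, w1=1.0, w2=1.0):
    """Hypothetical linear combination of looming (= 1/TTC) and VTC.
    The actual HVTC weights are defined in the paper, not here."""
    return w1 * looming(R, R_dot) + w2 * vtc(R, R_dot, R0)

# Example: observer closing at 2 m/s on a surface 10 m away,
# with a desired minimum clearance of 1 m.
cue = hvtc(R=10.0, R_dot=-2.0, R0=1.0)
```

Both constituent cues are positive when range is decreasing, so a thresholded `cue` value can partition the surroundings into safe and danger zones of varying degree, as the abstract describes.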
