Low-Level Visual Homing

We present a variant of the snapshot model [1] for insect visual homing. In this model, an agent stores a snapshot image at the goal position; the disparity between the current and snapshot images then guides the agent's return. Here, a matrix of local low-level processing elements computes this disparity and transforms it into a motion vector. This scheme contrasts with other variants of the snapshot model, which operate on one-dimensional images generally taken as views of a synthetic or simplified real-world setting; our approach operates directly on two-dimensional images of the real world. Although the system does not model any known neural structure, we argue that it is more biologically plausible than competing techniques, both because the processing applied is low-level and because the information processed appears to be of the same kind as that processed by insects. We present a comparison of results obtained on a set of real-world images.
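To make the disparity-to-motion-vector idea concrete, the following is a minimal sketch of one common way to compute image disparity between a stored snapshot and a current view: block matching by sum of squared differences, with the mean block displacement taken as a crude homing cue. This is an illustrative assumption on our part, not the paper's actual matrix of local processing elements; the function name, block size, and search radius are all hypothetical choices.

```python
import numpy as np

def block_match_home_vector(snapshot, current, block=8, search=4):
    """Sketch of disparity-based homing: for each block in the stored
    snapshot, find the best-matching (minimum sum-of-squared-differences)
    block in the current image within a small search radius, then average
    the block displacements into a single 2-D motion cue.

    Returns (mean dx, mean dy) in pixels. Illustrative only; not the
    paper's processing-element matrix."""
    h, w = snapshot.shape
    shifts = []
    # Step block-by-block over the snapshot, staying clear of the borders
    # so every candidate window in the search range fits inside the image.
    for y in range(search, h - block - search, block):
        for x in range(search, w - block - search, block):
            ref = snapshot[y:y + block, x:x + block].astype(float)
            best_ssd, best_shift = None, (0, 0)
            # Exhaustive search over all displacements in the radius.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = current[y + dy:y + dy + block,
                                   x + dx:x + dx + block].astype(float)
                    ssd = np.sum((ref - cand) ** 2)
                    if best_ssd is None or ssd < best_ssd:
                        best_ssd, best_shift = ssd, (dx, dy)
            shifts.append(best_shift)
    # The mean displacement serves as the disparity-derived motion vector.
    return np.mean(shifts, axis=0)
```

Shifting a test image by a known amount and checking that the recovered vector matches is a simple sanity check for this kind of routine.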

[1] T. Collett et al., "Multiple stored views and landmark guidance in ants," Nature, 1998.

[2] B. Schölkopf et al., "Where did I take that snapshot? Scene-based homing by image matching," Biological Cybernetics, 1998.

[3] R. Pfeifer et al., "A mobile robot employing insect strategies for navigation," Robotics and Autonomous Systems, 2000.

[4] E. M. Riseman et al., "Image-based homing," 1992.

[5] E. Trucco et al., Introductory Techniques for 3-D Computer Vision, 1998.

[6] T. S. Collett et al., "Landmark learning in bees," Journal of Comparative Physiology, 1983.

[7] R. Möller et al., "Insect visual homing strategies in a robot with analog processing," Biological Cybernetics, 2000.

[8] T. S. Collett et al., "Biological compasses and the coordinate frame of landmark memories in honeybees," Nature, 1994.

[9] J. Koenderink et al., "Facts on optic flow," Biological Cybernetics, 1987.

[10] B. Kuipers et al., "A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations," Robotics and Autonomous Systems, 1991.

[11] J. Zeil et al., "Catchment areas of panoramic snapshots in outdoor scenes," Journal of the Optical Society of America A, 2003.

[12] J. Zeil et al., "Structure and function of learning flights in bees and wasps," 1996.

[13] D. Lambrinos et al., "A neural model of landmark navigation in insects," Neurocomputing, 1999.

[14] A. Vardy et al., "Biologically plausible visual homing methods based on optical flow techniques," Connection Science, 2005.

[15] D. Lambrinos et al., "Insect strategies of visual homing in mobile robots," 1998.

[16] S. Venkatesh et al., "Insect-inspired robotic homing," Adaptive Behavior, 1999.

[17] D. Hubel et al., "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex," The Journal of Physiology, 1962.

[18] T. Röfer et al., "Controlling a wheelchair with image-based homing," 1997.