Featureless Visual Navigation using Optical Flow of Omnidirectional Image Sequence

In this paper, we develop a featureless visual navigation algorithm for an autonomous robot equipped with a spherical imaging system. The spherical image is the normalised form of an omnidirectional image. Differences in the depth from the camera to objects in the scene yield disparity on the image. Using the disparity of optical flow vectors on the spherical image, we construct a method to compute the navigation direction. For the computation of the optical flow vectors, we develop the Horn-Schunck method on spherical images.

The spherical representation provides an expression of omnidirectional images that is independent of the particular imaging system. The view from the compound eye of an insect or the eyes of a bird yields a spherical image, and animals that observe spherical images decide their navigation direction from a sequence of such images. In particular, the compound eyes of insects detect moving objects in the environment and egomotion from optical flow. We therefore construct an algorithm to compute the free space and the navigation direction from the sequence of optical flow fields of spherical images. The use of optical flow fields allows the robot to navigate without any features or landmarks in the workspace [17-19].

In a real environment, the payload of a mobile robot, for example its power supply, the capacity of its input devices, and its computing power, is restricted, so mobile robots are required to have simple mechanisms and devices [11,17]. Vision sensors are low-cost devices that are easily mounted on mobile robots. As with the pinhole camera system, geometrical features such as lines and planes in the environment are fundamental cues to the configuration of obstacles in three-dimensional space. If we adopt these traditional strategies, the robot is required to detect the free space as the dual of the space occupied by obstacles. Furthermore, if a map of the workspace is used for navigation, the robot must prepare a geometrical transformation from the omnidirectional view to the map. These two methodologies require special
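
The optical flow computation above builds on the Horn-Schunck method [7]. As a point of reference, the following is a minimal Python/NumPy sketch of the classical planar Horn-Schunck iteration; the spherical-image variant developed in the paper is not reproduced here, and the function name and parameter values are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Classical planar Horn-Schunck optical flow (Horn and Schunck, 1981).

    Illustrative sketch only: the paper adapts this scheme to spherical
    images, and that adaptation is not reproduced here.
    """
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)

    # Spatio-temporal derivative estimates averaged over the two frames.
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    kt = 0.25 * np.ones((2, 2))
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2, kt) - convolve(I1, kt)

    # Weighted-average kernel giving the neighbourhood mean of the flow,
    # used by the smoothness term of the Horn-Schunck energy.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])

    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Jacobi update minimising the regularised brightness-constancy
        # energy; alpha weights the smoothness constraint.
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v

# Example: flow between two consecutive grayscale frames.
# u, v = horn_schunck(frame1, frame2, alpha=1.0, n_iter=200)

On the sphere, the planar pixel derivatives and the averaging step would be replaced by their counterparts in spherical coordinates, while the fixed-point structure of the iteration remains the same.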

[1] H. G. Krapp et al., "Robustness of the tuning of fly visual interneurons to rotatory optic flow," Journal of Neurophysiology, 2003.

[2] A. Makadia et al., "Image processing in catadioptric planes: spatiotemporal derivatives and optical flow computation," Proceedings of the IEEE Workshop on Omnidirectional Vision (held in conjunction with ECCV'02), 2002.

[3] S. van der Zwaan, "An Insect Inspired Visual Sensor for the Autonomous Navigation of a Mobile Robot," 2007.

[4] A. C. Kak et al., "Vision for Mobile Robot Navigation: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002.

[5] K. Daniilidis et al., "Catadioptric Projective Geometry," International Journal of Computer Vision, 2001.

[6] G. Sandini et al., "Uncalibrated obstacle detection using normal flow," Machine Vision and Applications, 2005.

[7] B. K. P. Horn and B. G. Schunck, "Determining Optical Flow," Artificial Intelligence, 1981.

[8] J. Gaspar et al., "Omni-directional vision for robot navigation," Proceedings of the IEEE Workshop on Omnidirectional Vision, 2000.

[9] P. Frossard et al., "Multiresolution motion estimation for omnidirectional images," 13th European Signal Processing Conference, 2005.

[10] S. K. Nayar, "Catadioptric omnidirectional camera," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997.

[11] R. Siegwart et al., "Robot Navigation by Panoramic Vision and Attention Guided Features," 18th International Conference on Pattern Recognition (ICPR'06), 2006.

[12] S. K. Nayar et al., "A Theory of Single-Viewpoint Catadioptric Image Formation," International Journal of Computer Vision, 1999.

[13] C. Laugier et al., "Real-time moving obstacle detection using optical flow models," IEEE Intelligent Vehicles Symposium, 2006.

[14] G. L. Barrows et al., "Flying insect inspired vision for autonomous aerial robot maneuvers in near-earth environments," IEEE International Conference on Robotics and Automation (ICRA'04), 2004.

[15] P. J. Sobey, "Active navigation with a monocular robot," Biological Cybernetics, 1994.

[16] H. G. Krapp et al., "Insect-Inspired Estimation of Egomotion," Neural Computation, 2004.

[17] A. Vardy et al., "Biologically plausible visual homing methods based on optical flow techniques," Connection Science, 2005.

[18] A. Imiya et al., "Featureless robot navigation using optical flow," Connection Science, 2005.

[19] A. Vardy et al., "Local visual homing by matched-filter descent in image distances," Biological Cybernetics, 2006.

[20] J. P. Barreto et al., "Unifying Image Plane Liftings for Central Catadioptric and Dioptric Cameras," 2006.

[21] J. Santos-Victor et al., "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, 2000.

[22] H. A. Mallot et al., "Biomimetic robot navigation," Robotics and Autonomous Systems, 2000.

[23] E. Bayro-Corrochano et al., "Omnidirectional Vision and Invariant Theory for Robot Navigation Using Conformal Geometric Algebra," 18th International Conference on Pattern Recognition (ICPR'06), 2006.