On-line Planar Area Segmentation from Sequence of Monocular Monochrome Images for Visual Navigation of Autonomous Robot

We introduce an on-line method for segmenting a planar area from a sequence of images for the visual navigation of a robot. We assume that the robot moves autonomously in a man-made environment, without any map stored in memory or any markers placed in the environment. Since the environment is man-made, we can assume that the robot's workspace is a collection of spatial plane segments. The robot needs to separate the ground plane from the image or images captured by the imaging system mounted on it; the ground plane defines the collision-free space for navigation. We develop a strategy for computing the navigation direction from a hierarchical representation of the plane segments in the workspace, which requires the robot to extract this spatial hierarchy of plane segments from images. Finally, we propose an algorithm for plane segmentation based on the optical flow field observed by an uncalibrated moving camera.
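To make the core step concrete, the sketch below illustrates one plausible way to detect a dominant (ground) plane from an optical flow field: dense flow is computed between two monochrome frames, and a RANSAC-style random-sampling loop searches for the affine flow model that explains the largest image region, which is taken as the dominant plane. This is a minimal illustrative sketch, not the paper's implementation; the function names, the Farneback flow step, and all thresholds and parameters are assumptions introduced here.

```python
# Illustrative sketch (not the authors' algorithm) of dominant-plane
# segmentation from dense optical flow, using OpenCV and NumPy.
import cv2
import numpy as np

def dense_flow(prev_gray, next_gray):
    """Dense optical flow between two monochrome frames (Farneback method)."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

def fit_affine_flow(pts, flow_vecs):
    """Least-squares affine flow model u(x, y) = [x, y, 1] @ A for sampled points."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])        # N x 3
    A, _, _, _ = np.linalg.lstsq(X, flow_vecs, rcond=None)  # 3 x 2
    return A

def dominant_plane_mask(flow, n_trials=50, n_samples=10, tol=1.0):
    """RANSAC-style search for the affine flow model explaining the largest
    image region; returns a boolean mask of that region (the dominant plane)."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    vecs = flow.reshape(-1, 2).astype(np.float64)

    best_mask, best_count = None, -1
    rng = np.random.default_rng(0)
    for _ in range(n_trials):
        idx = rng.choice(coords.shape[0], size=n_samples, replace=False)
        A = fit_affine_flow(coords[idx], vecs[idx])
        pred = np.hstack([coords, np.ones((coords.shape[0], 1))]) @ A
        residual = np.linalg.norm(pred - vecs, axis=1)
        inliers = residual < tol             # pixels consistent with the planar model
        count = int(inliers.sum())
        if count > best_count:
            best_count, best_mask = count, inliers
    return best_mask.reshape(h, w)

# Hypothetical usage: f0, f1 are consecutive monochrome frames from the moving
# camera; mask marks candidate collision-free (ground-plane) pixels.
# f0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
# f1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
# flow = dense_flow(f0, f1)
# mask = dominant_plane_mask(flow)
```

In this sketch the affine model stands in for the image motion induced by a single spatial plane under small camera motion; how the detected region is organized into a hierarchy of plane segments and turned into a navigation direction is left to the paper itself.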
