Vision and Inertial Sensor Cooperation Using Gravity as a Vertical Reference

This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be exploited to give robust solutions for image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused at a very early processing stage with vision, playing a key role in the execution of visual movements such as gaze holding and tracking, while visual cues aid spatial orientation and body equilibrium. In this paper, we set out a framework for using inertial sensor data in vision systems and describe some of the results obtained. The unit sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, we can recover the collineations of level planes, providing enough restrictions to segment and reconstruct vertical features and leveled planar patches.
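
As a rough sketch of the two relations mentioned above (the focal distance recovered from a single vanishing point of a level direction plus the vertical, and the image horizon as the vanishing line of level planes), the following Python snippet assumes a standard pinhole model with square pixels and a known principal point; the function names and numeric values are illustrative, not the paper's implementation.

```python
import numpy as np

def focal_from_vanishing_point(vp, g_cam, pp):
    """Focal distance from one vanishing point of a level (horizontal)
    scene direction and the camera-frame vertical.

    An image point (u, v) back-projects to the ray
    d ~ ((u - cx)/f, (v - cy)/f, 1); for a level direction d is
    orthogonal to gravity, so g . d = 0 can be solved for f.
    """
    u, v = vp
    cx, cy = pp
    gx, gy, gz = np.asarray(g_cam) / np.linalg.norm(g_cam)
    # g . d = 0  =>  gx*(u - cx) + gy*(v - cy) + gz*f = 0
    return -(gx * (u - cx) + gy * (v - cy)) / gz

def horizon_line(f, g_cam, pp):
    """Image horizon as a homogeneous line l with l . (u, v, 1) = 0.

    The horizon is the vanishing line of level planes, l ~ K^-T g,
    with K = [[f, 0, cx], [0, f, cy], [0, 0, 1]].
    """
    cx, cy = pp
    gx, gy, gz = np.asarray(g_cam) / np.linalg.norm(g_cam)
    return np.array([gx, gy, gz * f - gx * cx - gy * cy])

# Illustrative values: the vertical sensed by the accelerometers in camera
# coordinates, one vanishing point of a level scene direction, and the
# principal point assumed at the centre of a 640x480 image.
g_cam = np.array([0.05, 0.99, 0.15])   # camera-frame vertical (unnormalised)
vp = (720.0, 130.0)                    # vanishing point of a level direction
pp = (320.0, 240.0)                    # assumed principal point

f = focal_from_vanishing_point(vp, g_cam, pp)
l = horizon_line(f, g_cam, pp)
print(f"focal distance ~ {f:.1f} px, horizon line {l}")
```

Note that the recovered vanishing point lies on the computed horizon line by construction, which serves as a simple consistency check on the two formulas.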
