Towards an Embedded Visuo-Inertial Smart Sensor

In the nervous system of primates, changes in posture are detected by the central nervous system through the vestibular system. This system, located in the inner ear, coordinates several outputs to maintain stable balance, visual gaze, and autonomic control in response to changes in posture. The vestibular data are therefore merged with other sensory data such as touch and vision. This visuo-inertial fusion is crucial for several tasks such as navigation, depth estimation, and stabilization. This paper proposes "primate-inspired" sensing hardware based on a CMOS imager and an artificial vestibular system. The whole device can be considered a smart embedded sensor, and one of the most original aspects of this approach is the use of a System on Chip implemented in an FPGA to manage the entire system. The sensing device is designed around a 4-megapixel CMOS imager, and the artificial vestibular unit is composed of three linear accelerometers and three gyroscopes. This structure gives the system a high degree of versatility and allows the implementation of parallel image and inertial processing algorithms. To illustrate the proposed approach, depth estimation with a Kalman filter implementation is carried out.
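The abstract does not give the filter equations, but a minimal sketch of such a visuo-inertial depth filter might look as follows, assuming a hypothetical 1-D model of scene depth along the optical axis: the class name DepthKalmanFilter, the noise parameters, and the accelerometer-driven prediction step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal 1-D depth Kalman filter sketch: depth z along the optical axis is
# predicted from an accelerometer reading of the artificial vestibular unit
# and corrected with a visual depth measurement (e.g. from feature tracking).
# State x = [depth, depth_rate]; all names and noise values are illustrative.

class DepthKalmanFilter:
    def __init__(self, dt, accel_noise=0.1, meas_noise=0.05):
        self.dt = dt
        self.x = np.zeros(2)                   # [z (m), z_dot (m/s)]
        self.P = np.eye(2)                     # state covariance
        self.F = np.array([[1.0, dt],
                           [0.0, 1.0]])        # constant-velocity transition
        self.B = np.array([0.5 * dt**2, dt])   # acceleration input model
        self.Q = accel_noise**2 * np.outer(self.B, self.B)  # process noise
        self.H = np.array([[1.0, 0.0]])        # only depth is measured
        self.R = np.array([[meas_noise**2]])   # measurement noise

    def predict(self, a_z):
        """Propagate the state with the inertial (accelerometer) reading a_z."""
        self.x = self.F @ self.x + self.B * a_z
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z_meas):
        """Correct the prediction with a visual depth measurement z_meas."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ (z_meas - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                       # fused depth estimate
```

In use, predict() would be called at the inertial sampling rate and update() whenever a new image-based depth measurement becomes available, which mirrors the parallel image and inertial processing paths the abstract describes.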
