Visual Data Fusion for Objects Localization by Active Vision

Visual sensors provide exclusively uncertain and partial knowledge of a scene. In this article, we present a scene knowledge representation suited to the integration and fusion of new, uncertain, and partial sensor measurements. It is based on a mixture of stochastic and set-membership models. We consider that, for a large class of applications, an approximate representation is sufficient to build a preliminary map of the scene. Our approximation mainly results in ellipsoidal calculus, by means of a normal assumption for stochastic laws and ellipsoidal outer or inner bounding for uniform laws. These approximations allow us to build an efficient estimation process that integrates visual data online. Based on this estimation scheme, optimal exploratory motions of the camera can be determined automatically. Real-time experimental results validating our approach are finally given.
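To illustrate the set-membership side of the approach, the sketch below fuses two ellipsoidal bounds on an object's position by the classical convex-combination (Schweppe-style) outer bound of their intersection. This is a minimal illustration, not the paper's implementation: the function name, the trace criterion, and the grid search over the weight are assumptions made for the example.

```python
import numpy as np

def fuse_ellipsoids(c1, P1, c2, P2, n_grid=101):
    """Outer ellipsoidal bound of the intersection of two ellipsoids
    E_i = {x : (x - c_i)^T P_i^{-1} (x - c_i) <= 1}.
    For each weight lam in [0, 1], the convex combination of the two
    quadratic constraints yields an ellipsoid containing E_1 ∩ E_2;
    we keep the weight minimizing the trace of the fused shape matrix."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for lam in np.linspace(0.0, 1.0, n_grid):
        Xi = lam * P1i + (1.0 - lam) * P2i            # fused inverse shape
        X = np.linalg.inv(Xi)
        c = X @ (lam * (P1i @ c1) + (1.0 - lam) * (P2i @ c2))
        # scale factor obtained by completing the square in the combined constraint
        k = 1.0 - lam * (c1 @ P1i @ c1) - (1.0 - lam) * (c2 @ P2i @ c2) + c @ Xi @ c
        if k <= 0:                                    # degenerate bound, skip
            continue
        P = k * X                                     # fused shape matrix
        if best is None or np.trace(P) < best[2]:
            best = (c, P, np.trace(P))
    return best[0], best[1]

# Example: two unit disks whose centers are one unit apart.
c, P = fuse_ellipsoids(np.zeros(2), np.eye(2), np.array([1.0, 0.0]), np.eye(2))
```

By construction the fused ellipsoid contains every point of the intersection for any weight, so minimizing the trace only tightens the bound; with the endpoints lam = 0 and lam = 1 in the grid, the result is never larger (in trace) than either input ellipsoid.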
