Cue integration for visual servoing

The robustness and reliability of vision algorithms are now key issues in robotics research and industrial applications. To control a robot in a closed-loop fashion, various tracking systems have been reported in the literature. A common approach to increasing the robustness of a tracking system is the use of models known a priori, such as a CAD model of the object or a motion model. Our hypothesis is that the fusion of multiple visual cues facilitates robust detection and tracking of objects in scenes of realistic complexity. A particular application is the estimation of a robot end-effector's position in a sequence of images. This work investigates two approaches to cue integration: 1) voting and 2) fuzzy logic-based fusion. Both approaches have been tested on scenes of varying complexity. Experimental results clearly demonstrate that cue fusion yields a tracking system with robust performance; the robustness is particularly evident in scenes with multiple moving objects and partial occlusion of the tracked object.
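The voting approach to cue integration can be illustrated with a minimal sketch: each cue produces a normalized response map over a search window, and the fused estimate is the position receiving the largest weighted vote. The cue names, weights, and window size below are illustrative assumptions, not details taken from the paper.

```python
# Sketch of cue integration by weighted voting for visual tracking.
# Cue names, weights, and the 5x5 window are hypothetical choices.
import numpy as np

def fuse_by_voting(cue_maps, weights):
    """Combine per-cue response maps (each normalized to [0, 1]) by
    weighted voting and return the winning pixel position (row, col)."""
    assert len(cue_maps) == len(weights)
    accumulator = np.zeros_like(cue_maps[0], dtype=float)
    for response, w in zip(cue_maps, weights):
        accumulator += w * response  # each cue casts a weighted vote per pixel
    return np.unravel_index(np.argmax(accumulator), accumulator.shape)

# Toy example: three cues (e.g. color, motion, edges) over a 5x5 window.
# Color and motion agree on the target at (2, 3); edges respond to a
# distractor at (0, 0), but it is outvoted by the other two cues.
color = np.zeros((5, 5));  color[2, 3] = 1.0
motion = np.zeros((5, 5)); motion[2, 3] = 0.9
edges = np.zeros((5, 5));  edges[0, 0] = 1.0
pos = fuse_by_voting([color, motion, edges], weights=[0.4, 0.4, 0.2])
# pos == (2, 3): the distractor seen by only one cue does not win
```

A fuzzy logic-based scheme would differ mainly in how the per-cue responses are combined, e.g. by applying fuzzy membership functions and inference rules rather than a weighted sum, which allows cue reliabilities to be expressed linguistically.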
