Visual Behaviors for Docking

This paper describes vision-based behaviors for docking operations in mobile robotics. Two different situations are presented: in the ego-docking, each robot is equipped with a camera and its motion is controlled while docking to a surface, whereas in the eco-docking, the camera and all the necessary computational resources are placed in a single external docking station, which may serve several robots. In both situations, the goal is to control both the orientation, aligning the camera optical axis with the surface normal, and the approach speed, slowing down during the maneuver. These goals are accomplished without any 3D reconstruction of the environment and without calibrating the setup, in contrast with traditional approaches. Instead, we use image measurements directly to close the control loop of the mobile robot. In the approach we propose, the robot motion is driven directly by the first-order spatio-temporal image derivatives, which can be estimated robustly and quickly. The docking system operates in real time, and its performance is robust in both the ego-docking and eco-docking paradigms. Experiments are described.
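To make the idea concrete, the sketch below shows one way a control loop could be driven directly by first-order spatio-temporal image derivatives, without 3D reconstruction or calibration. It is a minimal illustration, not the authors' implementation: the normal-flow computation follows the standard definition, while the left/right flow-balancing steering law, the speed law, and the gain parameters (`k_turn`, `v_max`) are illustrative assumptions for a forward-facing camera on a differential-drive robot.

```python
# Minimal sketch: docking commands from first-order spatio-temporal image
# derivatives. The control laws and gains below are illustrative assumptions,
# not the paper's exact formulation.
import numpy as np


def spatiotemporal_derivatives(prev_frame, curr_frame):
    """First-order derivatives Ix, Iy, It from two grayscale frames."""
    curr = curr_frame.astype(float)
    prev = prev_frame.astype(float)
    Iy, Ix = np.gradient(curr)      # spatial derivatives (rows -> y, cols -> x)
    It = curr - prev                # temporal derivative (frame difference)
    return Ix, Iy, It


def normal_flow_magnitude(Ix, Iy, It, eps=1e-3):
    """|normal flow| = |It| / |grad I|, evaluated where the gradient is reliable."""
    grad_mag = np.hypot(Ix, Iy)
    valid = grad_mag > eps
    nf = np.zeros_like(grad_mag)
    nf[valid] = np.abs(It[valid]) / grad_mag[valid]
    return nf, valid


def docking_command(prev_frame, curr_frame, k_turn=0.5, v_max=0.3):
    """Map image measurements to (linear, angular) velocity commands.

    Orientation (assumed law): balance the average normal flow between the
    left and right image halves; zero imbalance roughly corresponds to the
    optical axis being aligned with the surface normal.
    Speed (assumed law): slow down as the overall flow grows, i.e. as the
    docking surface gets closer.
    """
    Ix, Iy, It = spatiotemporal_derivatives(prev_frame, curr_frame)
    nf, valid = normal_flow_magnitude(Ix, Iy, It)

    mid = nf.shape[1] // 2
    left = nf[:, :mid][valid[:, :mid]].mean() if valid[:, :mid].any() else 0.0
    right = nf[:, mid:][valid[:, mid:]].mean() if valid[:, mid:].any() else 0.0

    angular = k_turn * (right - left)        # steer toward the weaker-flow side
    linear = v_max / (1.0 + left + right)    # approach speed decays with flow
    return linear, angular
```

In use, each new camera frame would be paired with the previous one and passed to `docking_command`, and the resulting velocities sent to the robot base; no camera calibration or depth estimate enters the loop, which is the point the paper makes about closing the control loop on image measurements alone.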
