Multi-focal feature tracking for a human-assisted mobile robot

In the Autonomous City Explorer project, an interactive robot finds its way to a given destination in unknown urban environments by interacting with pedestrians. Since the robot operates in a human-dominated environment, a user can send it toward a destination by selecting a landmark, which is described by 2D image features and then tracked. To make landmark selection natural from the user's perspective while keeping feature tracking accurate enough for safe navigation, the robot preselects visual features and presents to the user only those image regions that promise high tracking accuracy. In addition, a multi-focal camera system extends the sensing range. SIFT, Harris corners, and optical flow, used for tracking and self-localization, are compared and assigned to the different visual sensors. A coordination strategy is realized in which the wide-field-of-view camera is used for the robot's orientation control and the high-resolution camera for its forward motion control. The performance is evaluated experimentally.
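For concreteness, the preselection-and-tracking idea can be illustrated with a minimal OpenCV sketch. The grid-based Harris-corner-density score, the function names, and all parameter values below are illustrative assumptions, not the paper's implementation; the paper's actual accuracy criterion and the multi-focal camera coordination are not reproduced here.

```python
import cv2

def preselect_regions(gray, grid=(4, 4), top_k=3):
    """Rank image regions by Harris-corner density as a simple proxy for
    expected tracking accuracy, returning the best regions to offer the
    user as landmark candidates. (Hypothetical scoring, for illustration.)"""
    h, w = gray.shape
    scored = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            y0, y1 = i * h // grid[0], (i + 1) * h // grid[0]
            x0, x1 = j * w // grid[1], (j + 1) * w // grid[1]
            corners = cv2.goodFeaturesToTrack(
                gray[y0:y1, x0:x1], maxCorners=50, qualityLevel=0.01,
                minDistance=7, useHarrisDetector=True)
            score = 0 if corners is None else len(corners)
            scored.append(((x0, y0, x1, y1), score))
    # Regions with many stable corners are assumed to track more reliably.
    scored.sort(key=lambda s: s[1], reverse=True)
    return [region for region, _ in scored[:top_k]]

def track_landmark(prev_gray, gray, prev_pts):
    """Track the selected landmark features frame-to-frame with pyramidal
    Lucas-Kanade optical flow (Bouguet-style, via OpenCV)."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1  # keep only points tracked successfully
    return prev_pts[ok], next_pts[ok]
```

A caller would run preselect_regions on the wide-angle view, let the user pick one of the returned regions, seed corner features inside it, and then pass consecutive grayscale frames through track_landmark to follow the chosen landmark.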
